omgkit 2.11.0 → 2.12.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -7,7 +7,7 @@
  [![License](https://img.shields.io/badge/license-MIT-blue)](LICENSE)

  > **AI Team System for Claude Code**
- > 23 Agents • 58 Commands • 29 Workflows • 88 Skills • 10 Modes
+ > 23 Agents • 58 Commands • 29 Workflows • 88 Skills • 10 Modes • 12 Archetypes
  > *"Think Omega. Build Omega. Be Omega."*

  OMGKIT transforms Claude Code into an autonomous AI development team with sprint management, specialized agents, and Omega-level thinking for 10x-1000x productivity improvements.
@@ -21,8 +21,10 @@ OMGKIT transforms Claude Code into an autonomous AI development team with sprint
  | **Workflows** | 29 | Complete development processes |
  | **Skills** | 88 | Domain expertise modules |
  | **Modes** | 10 | Behavioral configurations |
+ | **Archetypes** | 12 | Project templates for autonomous dev |
  | **Sprint Management** | ✅ | Vision, backlog, team autonomy |
  | **Omega Thinking** | ✅ | 7 modes for 10x-1000x solutions |
+ | **Autonomous Dev** | ✅ | Build complete apps from idea to deploy |

  ## 🚀 Installation

@@ -164,6 +166,49 @@ After installation, use these commands in Claude Code:
  /team:status # Show team activity
  ```

+ ### Autonomous Development
+ ```bash
+ /auto:init <idea> # Start discovery for new project
+ /auto:start # Begin/continue autonomous execution
+ /auto:status # Check project progress
+ /auto:approve # Approve checkpoint to continue
+ /auto:reject # Request changes with feedback
+ /auto:resume # Resume from saved state
+ ```
+
+ ## 🤖 Autonomous Development (12 Archetypes)
+
+ Build complete applications autonomously from idea to deployment.
+
+ | Archetype | Description |
+ |-----------|-------------|
+ | **SaaS MVP** | Multi-tenant SaaS with auth, payments |
+ | **API Service** | Backend APIs for web/mobile apps |
+ | **CLI Tool** | Command-line utilities |
+ | **Library/SDK** | Reusable npm packages |
+ | **Full-Stack App** | Complete web applications |
+ | **Mobile App** | iOS/Android with React Native |
+ | **AI-Powered App** | LLM apps with RAG, function calling |
+ | **AI Model Building** | ML model training pipelines |
+ | **Desktop App** | Electron cross-platform apps |
+ | **IoT App** | Device management, real-time data |
+ | **Game** | Unity/Godot game development |
+ | **Simulation** | Scientific/engineering simulations |
+
+ ### Artifacts System
+
+ Provide project context through artifacts:
+
+ ```
+ .omgkit/artifacts/
+ ├── data/ # Sample data, schemas
+ ├── docs/ # Requirements, user stories
+ ├── knowledge/ # Glossary, business rules
+ ├── research/ # Competitor analysis
+ ├── assets/ # Images, templates
+ └── examples/ # Code samples
+ ```
+
  ## 📋 Workflows (29)

  Workflows are orchestrated sequences of agents, commands, and skills that guide complete development processes.
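Because the `.omgkit/artifacts/` layout shown above is just plain files on disk, you can sanity-check what context a project exposes before starting discovery. The snippet below is a purely illustrative sketch; the helper and its output format are hypothetical, not an omgkit API:

```python
from pathlib import Path

# Hypothetical helper: inventory the context files under .omgkit/artifacts/
# so you can see what the discovery phase will have to work with.
def list_artifacts(root: str = ".omgkit/artifacts") -> dict[str, list[str]]:
    base = Path(root)
    manifest: dict[str, list[str]] = {}
    for category in ("data", "docs", "knowledge", "research", "assets", "examples"):
        folder = base / category
        if folder.is_dir():
            manifest[category] = sorted(p.name for p in folder.rglob("*") if p.is_file())
    return manifest

if __name__ == "__main__":
    for category, files in list_artifacts().items():
        print(f"{category}: {len(files)} file(s)")
```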
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "omgkit",
-   "version": "2.11.0",
+   "version": "2.12.0",
    "description": "Omega-Level Development Kit - AI Team System for Claude Code. 23 agents, 58 commands, 88 skills, sprint management.",
    "keywords": [
      "claude-code",
@@ -0,0 +1,443 @@
+ name: "AI Model Building"
+ id: ai-model-building
+ description: "ML/AI model development with training pipelines, experiment tracking, and model deployment"
+ estimated_duration: "4-8 weeks"
+ icon: "brain"
+
+ # Default technology recommendations
+ defaults:
+   framework: pytorch
+   experiment_tracking: wandb
+   data_versioning: dvc
+   model_registry: mlflow
+   compute: aws_sagemaker
+   serving: vllm
+   language: python
+
+ # Alternative technology stacks
+ alternatives:
+   framework:
+     - id: pytorch
+       name: "PyTorch"
+       description: "Most flexible, research-friendly"
+     - id: tensorflow
+       name: "TensorFlow"
+       description: "Production-ready, TFX ecosystem"
+     - id: jax
+       name: "JAX"
+       description: "High-performance, functional approach"
+     - id: huggingface
+       name: "Hugging Face Transformers"
+       description: "Best for NLP/LLM work"
+
+   experiment_tracking:
+     - id: wandb
+       name: "Weights & Biases"
+       description: "Most popular, excellent visualization"
+     - id: mlflow
+       name: "MLflow"
+       description: "Open-source, self-hosted option"
+     - id: comet
+       name: "Comet ML"
+       description: "Good for teams"
+     - id: neptune
+       name: "Neptune.ai"
+       description: "Lightweight, flexible"
+
+   compute:
+     - id: aws_sagemaker
+       name: "AWS SageMaker"
+       description: "Managed ML platform"
+     - id: gcp_vertex
+       name: "GCP Vertex AI"
+       description: "Google's ML platform"
+     - id: azure_ml
+       name: "Azure ML"
+       description: "Microsoft's ML platform"
+     - id: local_gpu
+       name: "Local GPU"
+       description: "On-premise training"
+
+ # Phases of development
+ phases:
+   - id: discovery
+     name: "Problem Definition"
+     description: "Define ML problem, success metrics, and data availability"
+     order: 1
+     checkpoint: true
+     checkpoint_message: |
+       Problem definition complete. Review:
+       - ML task definition and approach
+       - Success metrics and baselines
+       - Data availability and quality
+
+       Approve to proceed with data engineering.
+
+     steps:
+       - id: problem_definition
+         name: "Problem Definition"
+         agent: planner
+         description: "Define the ML problem clearly"
+
+       - id: success_metrics
+         name: "Success Metrics"
+         agent: planner
+         description: "Define metrics and baselines"
+
+       - id: data_audit
+         name: "Data Audit"
+         agent: researcher
+         description: "Audit available data sources"
+
+       - id: feasibility
+         name: "Feasibility Analysis"
+         agent: architect
+         description: "Assess technical feasibility"
+
+     outputs:
+       - ".omgkit/generated/ml-problem-definition.md"
+       - ".omgkit/generated/metrics-baseline.md"
+       - ".omgkit/generated/data-audit.md"
+
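For the `success_metrics` step above, the baseline recorded in `metrics-baseline.md` is often as simple as a majority-class score to beat. A minimal sketch, assuming a classification task and an illustrative CSV with a `label` column:

```python
import pandas as pd

# Majority-class baseline: the accuracy any trained model must exceed to add value.
df = pd.read_csv("data/train.csv")  # illustrative path
majority_share = df["label"].value_counts(normalize=True).max()
print(f"Majority-class baseline accuracy: {majority_share:.3f}")
```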
+   - id: data_engineering
+     name: "Data Engineering"
+     description: "Collect, clean, and version data"
+     order: 2
+     checkpoint: true
+     checkpoint_message: |
+       Data engineering complete. Review:
+       - Data pipeline
+       - Data quality metrics
+       - Train/val/test splits
+
+       Approve to begin exploration.
+
+     steps:
+       - id: data_collection
+         name: "Data Collection"
+         agent: fullstack-developer
+         description: "Set up data collection pipelines"
+
+       - id: data_cleaning
+         name: "Data Cleaning"
+         agent: fullstack-developer
+         description: "Clean and preprocess data"
+
+       - id: data_versioning
+         name: "Data Versioning"
+         agent: fullstack-developer
+         description: "Set up DVC for data versioning"
+
+       - id: data_splits
+         name: "Data Splits"
+         agent: fullstack-developer
+         description: "Create train/val/test splits"
+
+       - id: data_validation
+         name: "Data Validation"
+         agent: tester
+         description: "Validate data quality"
+
+     outputs:
+       - "data/"
+       - "dvc.yaml"
+       - "src/data/"
+
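For the `data_splits` step above, a common pattern is a two-stage stratified split. A minimal sketch, assuming a tabular classification dataset and illustrative file paths:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("data/clean.csv")  # illustrative path to the cleaned dataset

# Carve out the test set first, then split the remainder into train/val.
# Stratifying keeps class proportions comparable across the three splits.
train_val, test = train_test_split(df, test_size=0.15, stratify=df["label"], random_state=42)
train, val = train_test_split(
    train_val, test_size=0.15 / 0.85, stratify=train_val["label"], random_state=42
)

for name, split in {"train": train, "val": val, "test": test}.items():
    split.to_csv(f"data/{name}.csv", index=False)
    print(name, len(split))
```

Persisting the splits to files, rather than re-splitting on the fly, is what keeps the later evaluation checkpoint reproducible.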
+   - id: exploration
+     name: "Exploration"
+     description: "EDA, baseline models, and feature engineering"
+     order: 3
+     checkpoint: true
+     checkpoint_message: |
+       Exploration complete. Review:
+       - EDA findings
+       - Baseline model results
+       - Feature engineering approach
+
+       Approve to begin model development.
+
+     steps:
+       - id: eda
+         name: "Exploratory Data Analysis"
+         agent: researcher
+         description: "Analyze data distributions and patterns"
+
+       - id: baseline
+         name: "Baseline Models"
+         agent: fullstack-developer
+         description: "Train simple baseline models"
+
+       - id: feature_engineering
+         name: "Feature Engineering"
+         agent: fullstack-developer
+         description: "Engineer features from raw data"
+
+       - id: experiment_setup
+         name: "Experiment Setup"
+         agent: fullstack-developer
+         description: "Set up experiment tracking"
+
+     outputs:
+       - "notebooks/eda.ipynb"
+       - "notebooks/baseline.ipynb"
+       - "src/features/"
+
+   - id: model_development
+     name: "Model Development"
+     description: "Design and implement model architecture"
+     order: 4
+     checkpoint: true
+     checkpoint_message: |
+       Model architecture complete. Review:
+       - Model architecture design
+       - Training configuration
+       - Resource requirements
+
+       Approve to begin training.
+
+     steps:
+       - id: architecture_design
+         name: "Architecture Design"
+         agent: architect
+         description: "Design model architecture"
+
+       - id: model_implementation
+         name: "Model Implementation"
+         agent: fullstack-developer
+         description: "Implement model in code"
+
+       - id: training_pipeline
+         name: "Training Pipeline"
+         agent: fullstack-developer
+         description: "Build training pipeline"
+
+       - id: config_system
+         name: "Config System"
+         agent: fullstack-developer
+         description: "Set up hyperparameter configuration"
+
+     outputs:
+       - "src/models/"
+       - "src/training/"
+       - "configs/"
+
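With PyTorch as the default framework, the `training_pipeline` and `config_system` steps above usually reduce to a loop like the following sketch; the model, data, and hyperparameters are placeholders rather than the archetype's generated code:

```python
from pathlib import Path

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

config = {"lr": 1e-3, "batch_size": 32, "epochs": 5}  # would normally live in configs/

# Placeholder data and model so the loop runs end to end.
X, y = torch.randn(256, 16), torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(X, y), batch_size=config["batch_size"], shuffle=True)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

optimizer = torch.optim.Adam(model.parameters(), lr=config["lr"])
loss_fn = nn.CrossEntropyLoss()

for epoch in range(config["epochs"]):
    total = 0.0
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
        total += loss.item()
    print(f"epoch {epoch}: loss {total / len(loader):.4f}")

# Illustrative checkpoint path, matching the checkpoints/ output of the next phase.
Path("checkpoints").mkdir(exist_ok=True)
torch.save(model.state_dict(), "checkpoints/model.pt")
```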
+   - id: training
+     name: "Training"
+     description: "Train models with hyperparameter tuning"
+     order: 5
+     checkpoint: true
+     checkpoint_message: |
+       Training complete. Review:
+       - Training metrics and curves
+       - Best hyperparameters found
+       - Model checkpoints
+
+       Approve to proceed with evaluation.
+
+     steps:
+       - id: initial_training
+         name: "Initial Training"
+         agent: fullstack-developer
+         description: "Run initial training experiments"
+
+       - id: hyperparameter_tuning
+         name: "Hyperparameter Tuning"
+         agent: fullstack-developer
+         description: "Tune hyperparameters"
+
+       - id: distributed_training
+         name: "Distributed Training"
+         agent: fullstack-developer
+         description: "Scale training if needed"
+
+       - id: checkpoint_management
+         name: "Checkpoint Management"
+         agent: fullstack-developer
+         description: "Manage model checkpoints"
+
+     outputs:
+       - "checkpoints/"
+       - "wandb/"
+       - "results/"
+
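With Weights & Biases as the default experiment tracker, logging during `initial_training` and `hyperparameter_tuning` typically looks like this sketch; the project name and metric values are illustrative, and a prior `wandb login` is assumed:

```python
import wandb

# Illustrative run: record hyperparameters and per-epoch metrics.
run = wandb.init(project="my-ml-project", config={"lr": 1e-3, "epochs": 5})

for epoch in range(run.config["epochs"]):
    train_loss = 1.0 / (epoch + 1)  # placeholder metric from the training loop
    wandb.log({"epoch": epoch, "train_loss": train_loss})

run.finish()
```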
+   - id: evaluation
+     name: "Evaluation"
+     description: "Comprehensive model evaluation"
+     order: 6
+     checkpoint: true
+     checkpoint_message: |
+       Evaluation complete. Review:
+       - Evaluation metrics
+       - Bias analysis
+       - Error analysis
+
+       Approve to proceed with deployment.
+
+     steps:
+       - id: metrics_evaluation
+         name: "Metrics Evaluation"
+         agent: tester
+         description: "Calculate evaluation metrics"
+
+       - id: bias_testing
+         name: "Bias Testing"
+         agent: tester
+         description: "Test for model bias"
+
+       - id: interpretability
+         name: "Interpretability"
+         agent: researcher
+         description: "Analyze model interpretability"
+
+       - id: error_analysis
+         name: "Error Analysis"
+         agent: tester
+         description: "Analyze failure cases"
+
+     outputs:
+       - "reports/evaluation.md"
+       - "reports/bias-analysis.md"
+       - "reports/errors/"
+
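For the `metrics_evaluation` and `error_analysis` steps above, scikit-learn's reporting utilities cover most of what lands in `reports/evaluation.md`. A minimal sketch with placeholder predictions:

```python
from sklearn.metrics import classification_report, confusion_matrix

# Placeholder labels/predictions; in practice these come from the held-out test split.
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, digits=3))

# Keep the misclassified indices as a starting point for error analysis.
errors = [i for i, (t, p) in enumerate(zip(y_true, y_pred)) if t != p]
print("misclassified examples:", errors)
```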
+   - id: deployment
+     name: "Model Deployment"
+     description: "Deploy model for inference"
+     order: 7
+     checkpoint: true
+     checkpoint_message: |
+       Deployment preparation complete. Review:
+       - Inference optimization
+       - Serving configuration
+       - API design
+
+       Approve for production deployment.
+
+     steps:
+       - id: model_optimization
+         name: "Model Optimization"
+         agent: fullstack-developer
+         description: "Optimize model for inference"
+
+       - id: serving_setup
+         name: "Serving Setup"
+         agent: cicd-manager
+         description: "Set up model serving"
+
+       - id: api_implementation
+         name: "API Implementation"
+         agent: fullstack-developer
+         description: "Build inference API"
+
+       - id: load_testing
+         name: "Load Testing"
+         agent: tester
+         description: "Test inference performance"
+
+     outputs:
+       - "src/serving/"
+       - "src/api/"
+       - "Dockerfile"
+
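The `api_implementation` step above commonly wraps the optimized model in a small HTTP service. The FastAPI sketch below is illustrative only; the request schema and scoring logic are placeholders:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-inference")  # illustrative service name


class PredictRequest(BaseModel):
    features: list[float]


@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    # Placeholder scoring; a real service would load the trained model once at
    # startup and run it here.
    score = sum(req.features) / max(len(req.features), 1)
    return {"score": score}

# If saved as main.py, run locally with: uvicorn main:app --reload
```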
+   - id: monitoring
+     name: "Production Monitoring"
+     description: "Set up drift detection and retraining"
+     order: 8
+     checkpoint: true
+     checkpoint_message: |
+       Monitoring setup complete. Review:
+       - Drift detection configuration
+       - Alerting setup
+       - Retraining triggers
+
+       This is the final checkpoint.
+
+     steps:
+       - id: drift_detection
+         name: "Drift Detection"
+         agent: fullstack-developer
+         description: "Implement drift detection"
+
+       - id: monitoring_dashboard
+         name: "Monitoring Dashboard"
+         agent: fullstack-developer
+         description: "Build monitoring dashboard"
+
+       - id: alerting
+         name: "Alerting"
+         agent: cicd-manager
+         description: "Set up alerts"
+
+       - id: retraining_pipeline
+         name: "Retraining Pipeline"
+         agent: fullstack-developer
+         description: "Set up automated retraining"
+
+     outputs:
+       - "src/monitoring/"
+       - "src/retraining/"
+
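For the `drift_detection` step above, a simple and widely used check is a per-feature two-sample Kolmogorov-Smirnov test of live inputs against the training distribution. A minimal sketch with placeholder arrays:

```python
import numpy as np
from scipy.stats import ks_2samp

# Compare recent production inputs against the training distribution, per feature.
training = np.random.normal(0.0, 1.0, size=(5000, 3))   # placeholder reference data
production = np.random.normal(0.3, 1.0, size=(500, 3))  # placeholder live data

for i in range(training.shape[1]):
    stat, p_value = ks_2samp(training[:, i], production[:, i])
    drifted = p_value < 0.01
    print(f"feature {i}: KS={stat:.3f} p={p_value:.4f} drift={drifted}")
```

In production, the reference sample would come from the versioned training data rather than random draws, and a detected drift would feed the retraining trigger described above.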
+ # Autonomy rules for this archetype
+ autonomy_rules:
+   - pattern: "**/models/**"
+     level: 3
+     reason: "Model architecture needs review"
+   - pattern: "**/training/**"
+     level: 2
+     reason: "Training code needs quick review"
+   - pattern: "**/data/**"
+     level: 2
+     reason: "Data processing needs review"
+   - pattern: "configs/**"
+     level: 2
+     reason: "Configs affect training outcomes"
+   - pattern: "**/serving/**"
+     level: 3
+     reason: "Serving infrastructure is critical"
+   - pattern: "dvc.yaml"
+     level: 3
+     reason: "Data versioning config is important"
+
+ # Quality gates
+ quality_gates:
+   after_feature:
+     - "pytest tests/"
+     - "mypy src/"
+     - "black --check src/"
+   before_checkpoint:
+     - "data validation passes"
+     - "model validation passes"
+   before_deploy:
+     - "evaluation metrics >= baseline"
+     - "bias tests pass"
+     - "load test passes"
+
+ # ML-specific discovery questions
+ discovery_additions:
+   - category: "ML Task"
+     questions:
+       - "What type of ML task? (classification, regression, generation, etc.)"
+       - "What's the input data format?"
+       - "What's the expected output?"
+       - "Is this supervised, unsupervised, or reinforcement learning?"
+
+   - category: "Data"
+     questions:
+       - "How much labeled data do you have?"
+       - "How is the data currently stored?"
+       - "Are there any data privacy requirements?"
+       - "How often does the data change?"
+       - "What's the data quality like?"
+
+   - category: "Performance Requirements"
+     questions:
+       - "What's the target accuracy/performance metric?"
+       - "What's the acceptable inference latency?"
+       - "What's the expected query volume?"
+       - "Are there model size constraints?"
+
+   - category: "Infrastructure"
+     questions:
+       - "What compute resources are available for training?"
+       - "Where will the model be deployed?"
+       - "Do you need real-time or batch inference?"
+       - "What's the MLOps maturity level?"