oh-my-customcode 0.6.2 → 0.8.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +30 -12
- package/dist/cli/index.js +1 -0
- package/dist/index.js +17 -0
- package/package.json +4 -4
- package/templates/.claude/agents/db-postgres-expert.md +106 -0
- package/templates/.claude/agents/db-redis-expert.md +101 -0
- package/templates/.claude/agents/de-airflow-expert.md +71 -0
- package/templates/.claude/agents/de-dbt-expert.md +72 -0
- package/templates/.claude/agents/de-kafka-expert.md +81 -0
- package/templates/.claude/agents/de-pipeline-expert.md +92 -0
- package/templates/.claude/agents/de-snowflake-expert.md +89 -0
- package/templates/.claude/agents/de-spark-expert.md +80 -0
- package/templates/.claude/rules/SHOULD-agent-teams.md +47 -1
- package/templates/.claude/skills/airflow-best-practices/SKILL.md +56 -0
- package/templates/.claude/skills/dbt-best-practices/SKILL.md +54 -0
- package/templates/.claude/skills/de-lead-routing/SKILL.md +230 -0
- package/templates/.claude/skills/dev-lead-routing/SKILL.md +15 -0
- package/templates/.claude/skills/kafka-best-practices/SKILL.md +52 -0
- package/templates/.claude/skills/monitoring-setup/SKILL.md +115 -0
- package/templates/.claude/skills/pipeline-architecture-patterns/SKILL.md +83 -0
- package/templates/.claude/skills/postgres-best-practices/SKILL.md +66 -0
- package/templates/.claude/skills/redis-best-practices/SKILL.md +83 -0
- package/templates/.claude/skills/secretary-routing/SKILL.md +12 -0
- package/templates/.claude/skills/snowflake-best-practices/SKILL.md +65 -0
- package/templates/.claude/skills/spark-best-practices/SKILL.md +52 -0
- package/templates/CLAUDE.md.en +8 -5
- package/templates/CLAUDE.md.ko +8 -5
- package/templates/guides/airflow/README.md +32 -0
- package/templates/guides/dbt/README.md +32 -0
- package/templates/guides/iceberg/README.md +49 -0
- package/templates/guides/kafka/README.md +32 -0
- package/templates/guides/postgres/README.md +58 -0
- package/templates/guides/redis/README.md +50 -0
- package/templates/guides/snowflake/README.md +32 -0
- package/templates/guides/spark/README.md +32 -0
package/templates/.claude/skills/dev-lead-routing/SKILL.md
CHANGED

````diff
@@ -18,6 +18,9 @@ Routes development tasks to appropriate language and framework expert agents. Th
 | sw-engineer/frontend | fe-vercel-agent, fe-vuejs-agent, fe-svelte-agent | Frontend frameworks |
 | sw-engineer/backend | be-fastapi-expert, be-springboot-expert, be-go-backend-expert, be-nestjs-expert, be-express-expert | Backend frameworks |
 | sw-engineer/tooling | tool-npm-expert, tool-optimizer, tool-bun-expert | Build tools and optimization |
+| sw-engineer/database | db-supabase-expert, db-postgres-expert, db-redis-expert | Database design and optimization |
+| sw-architect | arch-documenter, arch-speckit-agent | Architecture documentation and spec-driven development |
+| infra-engineer | infra-docker-expert, infra-aws-expert | Container and cloud infrastructure |
 
 ## Language/Framework Detection
 
@@ -34,6 +37,11 @@ Routes development tasks to appropriate language and framework expert agents. Th
 | `.js`, `.jsx` (React) | fe-vercel-agent | React/Next.js |
 | `.vue` | fe-vuejs-agent | Vue.js |
 | `.svelte` | fe-svelte-agent | Svelte |
+| `.sql` (PostgreSQL) | db-postgres-expert | PostgreSQL |
+| `.sql` (Supabase) | db-supabase-expert | Supabase PostgreSQL |
+| `Dockerfile`, `*.dockerfile` | infra-docker-expert | Docker |
+| `*.tf`, `*.tfvars` | infra-aws-expert | Terraform/IaC |
+| `*.yaml`, `*.yml` (CloudFormation) | infra-aws-expert | AWS CloudFormation |
 
 ### Keyword Mapping
 
@@ -55,6 +63,13 @@ Routes development tasks to appropriate language and framework expert agents. Th
 | "npm" | tool-npm-expert |
 | "optimize", "bundle" | tool-optimizer |
 | "bun" | tool-bun-expert |
+| "postgres", "postgresql", "pg_stat", "psql" | db-postgres-expert |
+| "redis", "cache", "pub/sub", "sorted set" | db-redis-expert |
+| "supabase", "rls", "edge function" | db-supabase-expert |
+| "docker", "dockerfile", "container", "compose" | infra-docker-expert |
+| "aws", "cloudformation", "cdk", "terraform", "vpc", "iam", "s3", "lambda" | infra-aws-expert |
+| "architecture", "adr", "openapi", "swagger", "diagram" | arch-documenter |
+| "spec", "specification", "tdd", "requirements" | arch-speckit-agent |
 
 ## Command Routing
 
````
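Read literally, the new keyword rows amount to a first-match lookup. A minimal sketch (illustrative only — the real skill routes via the model's judgment, not literal string matching; the tuples below just mirror the added table rows):

```python
# Hypothetical sketch of the keyword-to-agent mapping added above.
# First matching keyword group wins, mirroring table order.
KEYWORD_AGENTS = {
    ("postgres", "postgresql", "pg_stat", "psql"): "db-postgres-expert",
    ("redis", "cache", "pub/sub", "sorted set"): "db-redis-expert",
    ("supabase", "rls", "edge function"): "db-supabase-expert",
    ("docker", "dockerfile", "container", "compose"): "infra-docker-expert",
    ("aws", "cloudformation", "cdk", "terraform", "vpc", "iam", "s3", "lambda"): "infra-aws-expert",
    ("architecture", "adr", "openapi", "swagger", "diagram"): "arch-documenter",
    ("spec", "specification", "tdd", "requirements"): "arch-speckit-agent",
}

def route(task):
    """Return the first expert agent whose keywords appear in the task, else None."""
    lowered = task.lower()
    for keywords, agent in KEYWORD_AGENTS.items():
        if any(kw in lowered for kw in keywords):
            return agent
    return None
```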
package/templates/.claude/skills/kafka-best-practices/SKILL.md
ADDED

````diff
@@ -0,0 +1,52 @@
+---
+name: kafka-best-practices
+description: Apache Kafka best practices for event streaming, topic design, and producer-consumer patterns
+user-invocable: false
+---
+
+# Apache Kafka Best Practices
+
+## Producer Patterns
+
+### Idempotent Producer (CRITICAL)
+- Enable `enable.idempotence=true`
+- Prevents duplicate messages
+- Requires `acks=all`, `retries > 0`, `max.in.flight.requests.per.connection <= 5`
+
+### Exactly-Once Semantics
+- Use transactional API: `initTransactions()`, `beginTransaction()`, `commitTransaction()`
+- For exactly-once end-to-end processing
+
+### Performance
+- Batching: `linger.ms` (wait for batch to fill)
+- Compression: `compression.type=snappy` or `lz4`
+- `batch.size`: 16KB default, tune based on message size
+
+## Consumer Patterns
+
+### Offset Management
+- Auto-commit: `enable.auto.commit=true` (at-least-once)
+- Manual commit: `commitSync()` or `commitAsync()` (better control)
+
+### Rebalancing
+- Cooperative sticky assignor: minimal rebalancing disruption
+- `session.timeout.ms` and `heartbeat.interval.ms` tuning
+
+### At-Least-Once vs Exactly-Once
+- At-least-once: default, idempotent processing required
+- Exactly-once: transactional consumer + producer
+
+## Topic Design
+
+### Partitioning
+- Partition count: based on throughput (MB/s ÷ partition throughput)
+- Key-based partitioning for ordering guarantees
+- More partitions = higher throughput (but more overhead)
+
+### Retention
+- Time-based: `retention.ms`
+- Size-based: `retention.bytes`
+- Log compaction: for changelog topics (`cleanup.policy=compact`)
+
+## References
+- [Kafka Documentation](https://kafka.apache.org/documentation/)
````
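The idempotent-producer checklist in the added skill can be collected into one configuration. A sketch in librdkafka/confluent-kafka property naming (the broker address and topic are placeholders; the settings themselves come from the bullets above):

```python
# Sketch: idempotent producer settings from the checklist above,
# expressed as librdkafka-style properties.
producer_config = {
    "bootstrap.servers": "localhost:9092",       # placeholder broker
    "enable.idempotence": True,                  # broker-side dedup of retries
    "acks": "all",                               # required for idempotence
    "retries": 2147483647,                       # retry transient errors
    "max.in.flight.requests.per.connection": 5,  # must be <= 5
    "compression.type": "lz4",                   # cheap CPU, good ratio
    "linger.ms": 20,                             # small wait to fill batches
    "batch.size": 16384,                         # 16KB default, tune per payload
}

# Usage (requires a running broker and confluent-kafka installed):
#   from confluent_kafka import Producer
#   producer = Producer(producer_config)
#   producer.produce("events", key=b"user-1", value=b"payload")
#   producer.flush()
```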
package/templates/.claude/skills/monitoring-setup/SKILL.md
ADDED

````diff
@@ -0,0 +1,115 @@
+---
+name: monitoring-setup
+description: Enable/disable OpenTelemetry console monitoring for Claude Code usage tracking
+argument-hint: "[enable|disable|status]"
+---
+
+# Monitoring Setup Skill
+
+Enable or disable OpenTelemetry console monitoring. When enabled, Claude Code outputs usage metrics (cost, tokens, sessions, LOC, commits, PRs, active time) and events (tool results, API requests) to the terminal.
+
+## Natural Language Triggers
+
+This skill activates when the user mentions any of:
+- Korean: "모니터링", "텔레메트리", "사용량 추적", "메트릭", "모니터링 켜줘", "텔레메트리 활성화"
+- English: "monitoring", "telemetry", "usage tracking", "metrics", "enable monitoring"
+- Combined with actions: "켜", "끄", "활성화", "비활성화", "설정", "enable", "disable", "setup"
+
+## Commands
+
+### enable (default)
+
+1. Read `.claude/settings.local.json` (create if not exists)
+2. Add or update `env` field with:
+```json
+{
+  "env": {
+    "CLAUDE_CODE_ENABLE_TELEMETRY": "1",
+    "OTEL_METRICS_EXPORTER": "console",
+    "OTEL_LOGS_EXPORTER": "console"
+  }
+}
+```
+3. Preserve all existing settings
+4. Report to user:
+```
+[Done] OpenTelemetry Console Monitoring enabled
+
+Configured in: .claude/settings.local.json
+Metrics: sessions, cost, tokens, LOC, commits, PRs, active time
+Events: tool results, API requests, tool decisions
+
+Note: Takes effect on next `claude` session restart.
+To disable: /monitoring-setup disable
+```
+
+### disable
+
+1. Read `.claude/settings.local.json`
+2. Remove OTel-related keys from `env`:
+   - `CLAUDE_CODE_ENABLE_TELEMETRY`
+   - `OTEL_METRICS_EXPORTER`
+   - `OTEL_LOGS_EXPORTER`
+3. If `env` object becomes empty, remove `env` field entirely
+4. Report:
+```
+[Done] OpenTelemetry Monitoring disabled
+
+Removed from: .claude/settings.local.json
+Takes effect on next session restart.
+```
+
+### status
+
+1. Read `.claude/settings.local.json`
+2. Check for OTel env vars
+3. Report current state:
+```
+[Monitoring Status]
+├── Enabled: Yes/No
+├── Metrics exporter: console / otlp / none
+├── Logs exporter: console / otlp / none
+└── Config: .claude/settings.local.json
+```
+
+## Implementation Notes
+
+- `settings.local.json` is NOT git-tracked (local to user)
+- Each user enables monitoring independently
+- No infrastructure required for console mode
+- Metrics appear in stderr during Claude Code execution
+- Default export interval: 60s for metrics, 5s for events
+
+## Available Metrics
+
+| Metric | Description | Unit |
+|--------|-------------|------|
+| `claude_code.session.count` | CLI sessions started | count |
+| `claude_code.cost.usage` | Session cost | USD |
+| `claude_code.token.usage` | Tokens used (input/output/cache) | tokens |
+| `claude_code.lines_of_code.count` | Code lines modified (added/removed) | count |
+| `claude_code.commit.count` | Git commits created | count |
+| `claude_code.pull_request.count` | Pull requests created | count |
+| `claude_code.active_time.total` | Active usage time | seconds |
+
+## Available Events
+
+| Event | Description |
+|-------|-------------|
+| `claude_code.tool_result` | Tool execution results with duration |
+| `claude_code.api_request` | API request details with cost/tokens |
+| `claude_code.api_error` | API error details |
+| `claude_code.tool_decision` | Tool accept/reject decisions |
+| `claude_code.user_prompt` | User prompt metadata (content redacted by default) |
+
+## Upgrade Path
+
+For production monitoring, upgrade from console to OTLP:
+
+```bash
+# In settings.local.json env:
+OTEL_METRICS_EXPORTER=otlp
+OTEL_LOGS_EXPORTER=otlp
+OTEL_EXPORTER_OTLP_PROTOCOL=grpc
+OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
+```
````
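The `enable` steps in the added skill (read `settings.local.json`, merge the `env` field, preserve everything else) can be sketched as:

```python
import json
from pathlib import Path

# Sketch of the "enable" flow above: merge the OTel env vars into
# .claude/settings.local.json without clobbering existing settings.
OTEL_ENV = {
    "CLAUDE_CODE_ENABLE_TELEMETRY": "1",
    "OTEL_METRICS_EXPORTER": "console",
    "OTEL_LOGS_EXPORTER": "console",
}

def enable_monitoring(settings_path: Path) -> dict:
    settings = {}
    if settings_path.exists():
        settings = json.loads(settings_path.read_text())
    env = settings.setdefault("env", {})
    env.update(OTEL_ENV)  # add/overwrite only the OTel keys
    settings_path.parent.mkdir(parents=True, exist_ok=True)
    settings_path.write_text(json.dumps(settings, indent=2))
    return settings
```

The `disable` flow is the inverse: pop the three keys and drop `env` if it ends up empty.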
package/templates/.claude/skills/pipeline-architecture-patterns/SKILL.md
ADDED

````diff
@@ -0,0 +1,83 @@
+---
+name: pipeline-architecture-patterns
+description: Data pipeline architecture patterns for ETL/ELT design, orchestration, and data quality frameworks
+user-invocable: false
+---
+
+# Data Pipeline Architecture Patterns
+
+## Pipeline Architectures
+
+### ETL vs ELT (CRITICAL)
+- **ETL**: Extract → Transform (staging) → Load
+  - Traditional, on-premise data warehouses
+  - Pre-aggregation, complex transformations
+- **ELT**: Extract → Load (raw) → Transform (in warehouse)
+  - Cloud warehouses (Snowflake, BigQuery)
+  - Leverage warehouse compute power
+
+### Lambda Architecture
+- Batch layer: historical data processing
+- Speed layer: real-time stream processing
+- Serving layer: merge batch + real-time views
+- Complexity: maintain two codebases
+
+### Kappa Architecture
+- Stream-only processing
+- Single codebase for batch + real-time
+- Reprocessing via replay
+- Simpler than Lambda
+
+### Medallion Architecture
+- **Bronze**: Raw data (append-only)
+- **Silver**: Cleaned, conformed data
+- **Gold**: Business-level aggregations
+- Databricks pattern
+
+## Orchestration Patterns
+
+### DAG-Based Orchestration
+- Airflow, Prefect, Dagster
+- Task dependencies as DAG
+- Retries, backfills, scheduling
+
+### Event-Driven Orchestration
+- Kafka, Pub/Sub triggers
+- Real-time, low-latency
+- Decoupled producers/consumers
+
+### Hybrid Orchestration
+- Scheduled batch + event-driven streams
+- Example: Airflow DAG triggered by Kafka event
+
+## Data Quality Frameworks
+
+### Data Contracts (CRITICAL)
+- Define schema, freshness, volume expectations
+- Producer-consumer agreement
+- Break build on violation
+
+### Validation Frameworks
+- **Great Expectations**: Python-based expectations
+- **dbt tests**: SQL-based tests
+- **Soda**: YAML-based checks
+
+### Data Lineage
+- Track data origin and transformations
+- Debug data quality issues
+- Compliance and auditing
+
+## Idempotency Patterns
+
+### Idempotent Design (CRITICAL)
+- Same input → same output (no side effects)
+- Upserts instead of inserts
+- Partition replacement instead of append
+
+### Deduplication
+- Use unique keys
+- Window-based deduplication
+- Consumer group offset management
+
+## References
+- [Data Engineering Design Patterns](https://www.oreilly.com/library/view/data-engineering-design/9781098130725/)
````
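The idempotent-design bullets in the added skill (partition replacement instead of append, dedup by unique key) can be illustrated with an in-memory stand-in for a partitioned sink; `store`, the partition key format, and the `id` field are hypothetical:

```python
# Sketch of the idempotent-design pattern above: replace a whole
# partition instead of appending, so reruns produce identical output.
# `store` stands in for any partitioned sink (tables, object prefixes).

def write_partition(store: dict, partition_key: str, rows: list) -> None:
    """Overwrite the partition wholesale; running twice is a no-op."""
    deduped = {r["id"]: r for r in rows}  # last write per unique key wins
    store[partition_key] = list(deduped.values())

store = {}
rows = [{"id": 1, "v": "a"}, {"id": 1, "v": "b"}, {"id": 2, "v": "c"}]
write_partition(store, "dt=2024-01-01", rows)
write_partition(store, "dt=2024-01-01", rows)  # rerun: same result
```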
package/templates/.claude/skills/postgres-best-practices/SKILL.md
ADDED

````diff
@@ -0,0 +1,66 @@
+---
+name: postgres-best-practices
+description: PostgreSQL best practices for database design, query optimization, and performance tuning
+user-invocable: false
+---
+
+# PostgreSQL Best Practices
+
+## Query Optimization
+
+### EXPLAIN ANALYZE (CRITICAL)
+- Use `EXPLAIN ANALYZE` to understand query plans
+- Identify slow operations: Seq Scan, Nested Loop
+- Check row estimates vs actual rows
+- Monitor buffers: shared hit vs read
+
+### Indexing (CRITICAL)
+- B-tree: default, most use cases
+- GIN: JSONB, arrays, full-text search
+- GiST: geometry, range types
+- BRIN: large sequential tables (time-series)
+- Partial indexes: filtered queries
+- Covering indexes (INCLUDE): avoid heap fetches
+
+### Index Maintenance
+- Create indexes concurrently: `CREATE INDEX CONCURRENTLY`
+- Monitor usage: `pg_stat_user_indexes`
+- Remove unused indexes
+- Reindex bloated indexes
+
+## Table Design
+
+### Partitioning (HIGH)
+- Range partitioning: time-series data
+- List partitioning: categorical data
+- Hash partitioning: even distribution
+- Declarative partitioning (PG 10+)
+
+### Data Types
+- Use appropriate types (int vs bigint, varchar vs text)
+- JSONB for semi-structured data
+- Arrays for multi-value columns
+- UUIDs for distributed IDs
+
+## Performance Tuning
+
+### Vacuum and Autovacuum
+- Autovacuum: default enabled
+- Monitor bloat: `pg_stat_user_tables`
+- Tune autovacuum thresholds
+- Manual VACUUM for large updates
+
+### Connection Pooling
+- Use pgBouncer or PgPool
+- Transaction pooling for short transactions
+- Session pooling for long transactions
+- Max connections: tune based on workload
+
+### Configuration
+- `shared_buffers`: 25% of RAM
+- `work_mem`: per operation, tune carefully
+- `effective_cache_size`: 50-75% of RAM
+- `random_page_cost`: 1.1 for SSD
+
+## References
+- [PostgreSQL Performance Optimization](https://wiki.postgresql.org/wiki/Performance_Optimization)
````
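The configuration rules of thumb in the added skill translate into simple arithmetic. A sketch (illustrative only; the resulting values go into `postgresql.conf`, and real tuning depends on workload):

```python
# Quick sketch of the sizing rules of thumb above:
#   shared_buffers ~ 25% of RAM, effective_cache_size 50-75% of RAM,
#   random_page_cost 1.1 on SSD. Purely illustrative helper.

def pg_sizing(ram_gb: int, on_ssd: bool = True) -> dict:
    return {
        "shared_buffers": f"{ram_gb // 4}GB",            # ~25% of RAM
        "effective_cache_size": f"{ram_gb * 3 // 4}GB",  # 75% (upper end of 50-75%)
        "random_page_cost": 1.1 if on_ssd else 4.0,      # SSD vs spinning disk
    }
```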
package/templates/.claude/skills/redis-best-practices/SKILL.md
ADDED

````diff
@@ -0,0 +1,83 @@
+---
+name: redis-best-practices
+description: Redis best practices for caching, data structures, and in-memory data architecture
+user-invocable: false
+---
+
+# Redis Best Practices
+
+## Caching Patterns
+
+### Cache-Aside (CRITICAL)
+- Read: check cache → miss → read DB → set cache
+- Write: update DB → invalidate cache
+- Best for read-heavy workloads
+
+### Write-Through
+- Write: update cache AND DB simultaneously
+- Ensures consistency
+- Higher write latency
+
+### Write-Behind
+- Write: update cache → async DB update
+- Lowest latency
+- Risk of data loss
+
+## Data Structures
+
+### String
+- Simple key-value: `SET key value`
+- Counters: `INCR`, `DECR`, `INCRBY`
+- Bit operations: `SETBIT`, `BITCOUNT`
+
+### Hash
+- Object storage: `HSET user:1 name "Alice"`
+- Partial updates: `HINCRBY user:1 visits 1`
+- Memory efficient for small hashes
+
+### List
+- Queues: `LPUSH`, `RPOP` (FIFO)
+- Stacks: `LPUSH`, `LPOP` (LIFO)
+- Capped collections: `LTRIM`
+
+### Set
+- Unique collections: `SADD`, `SMEMBERS`
+- Intersections: `SINTER`
+- Random sampling: `SRANDMEMBER`
+
+### Sorted Set
+- Leaderboards: `ZADD`, `ZRANGEBYSCORE`
+- Rate limiting: `ZADD timestamp`
+- Priority queues
+
+### Stream
+- Event log: `XADD`, `XREAD`
+- Consumer groups: `XGROUP`, `XACK`
+- Pub/Sub with persistence
+
+## Performance
+
+### Memory Optimization
+- Set maxmemory policy: `allkeys-lru`, `volatile-lru`
+- Monitor memory: `INFO memory`
+- Use appropriate data structures (Hash vs String)
+
+### Pipelining
+- Batch multiple commands
+- Reduces round-trip latency
+- Use for bulk operations
+
+## High Availability
+
+### Redis Cluster
+- Horizontal scaling
+- Automatic sharding (16384 slots)
+- Multi-master replication
+
+### Redis Sentinel
+- Automatic failover
+- Monitoring and notifications
+- Configuration provider
+
+## References
+- [Redis Best Practices](https://redis.io/docs/manual/patterns/)
````
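The cache-aside flow in the added skill can be sketched with plain dicts standing in for Redis and the database (in practice, swap in redis-py `GET`/`SET`/`DEL` against a real instance):

```python
# Sketch of the cache-aside pattern above: read-through on miss,
# invalidate on write. Dicts stand in for Redis and the DB.
cache = {}
db = {"user:1": {"name": "Alice"}}

def read(key):
    if key in cache:          # cache hit
        return cache[key]
    value = db.get(key)       # miss: fall through to the DB
    if value is not None:
        cache[key] = value    # populate for the next reader
    return value

def write(key, value):
    db[key] = value           # update the source of truth first
    cache.pop(key, None)      # then invalidate the stale entry
```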
package/templates/.claude/skills/secretary-routing/SKILL.md
CHANGED

````diff
@@ -20,6 +20,9 @@ Routes agent management tasks to the appropriate manager agent. This skill conta
 | mgr-gitnerd | Git operations | "commit", "push", "pr" |
 | mgr-sync-checker | Sync verification | "sync check", "verify sync" |
 | mgr-sauron | R017 auto-verification | "verify", "full check" |
+| mgr-claude-code-bible | Claude Code spec compliance | "spec check", "verify compliance" |
+| sys-memory-keeper | Memory operations | "save memory", "recall", "memory search" |
+| sys-naggy | TODO management | "todo", "track tasks", "task list" |
 
 ## Command Routing
 
@@ -32,6 +35,9 @@ audit → mgr-supplier
 git → mgr-gitnerd
 sync → mgr-sync-checker
 verify → mgr-sauron
+spec → mgr-claude-code-bible
+memory → sys-memory-keeper
+todo → sys-naggy
 batch → multiple (parallel)
 ```
 
@@ -48,6 +54,9 @@ batch → multiple (parallel)
 - "git commit/push/pr" → mgr-gitnerd
 - "sync check" → mgr-sync-checker
 - "verify" → mgr-sauron
+- "spec check" → mgr-claude-code-bible
+- "save/recall memory" → sys-memory-keeper
+- "todo/task list" → sys-naggy
 3. Spawn Task with selected agent role
 4. Monitor execution
 5. Report result to user
@@ -92,6 +101,9 @@ Use Task tool's `model` parameter to optimize cost and performance:
 | mgr-gitnerd | `sonnet` | Commit message quality |
 | mgr-sync-checker | `haiku` | Fast verification |
 | mgr-sauron | `sonnet` | Multi-round verification |
+| mgr-claude-code-bible | `sonnet` | Spec compliance checks |
+| sys-memory-keeper | `sonnet` | Memory operations, search |
+| sys-naggy | `haiku` | Simple TODO tracking |
 
 ### Task Call Examples
 
````
package/templates/.claude/skills/snowflake-best-practices/SKILL.md
ADDED

````diff
@@ -0,0 +1,65 @@
+---
+name: snowflake-best-practices
+description: Snowflake best practices for cloud data warehouse design, query optimization, and cost management
+user-invocable: false
+---
+
+# Snowflake Best Practices
+
+## Warehouse Design
+
+### Sizing (CRITICAL)
+- Start small (XS or S), scale up as needed
+- Enable auto-scaling for concurrency
+- Enable auto-suspend (1 minute idle)
+- Separate warehouses for different workloads
+
+### Multi-Cluster Warehouses
+- Use for high concurrency (many users)
+- Set min/max clusters based on load
+- Scaling policy: Standard (default) or Economy
+
+## Query Optimization
+
+### Clustering Keys (CRITICAL)
+- Define clustering keys for frequently filtered columns
+- Improves micro-partition pruning
+- Monitor clustering depth
+- Automatic clustering: `ALTER TABLE ... CLUSTER BY (...)`
+
+### Result Caching
+- 24-hour cache for identical queries
+- Use SHOW PARAMETERS to check cache status
+- Bypass cache with query hint: `/*+ NO_RESULT_CACHE */`
+
+### Materialized Views
+- For repeated aggregations
+- Automatically refreshed on base table changes
+- Cost: storage + refresh compute
+
+## Data Loading
+
+### COPY INTO (CRITICAL)
+- Batch load from stages (S3/GCS/Azure)
+- File size: 100-250MB compressed (optimal)
+- Use pattern matching for multiple files
+
+### Snowpipe
+- Continuous ingestion
+- Event-driven (S3 notifications)
+- Serverless compute
+
+## Cost Optimization
+
+### Resource Monitors
+- Set credit quotas per warehouse
+- Alerts and suspend actions
+- Track consumption with WAREHOUSE_METERING_HISTORY
+
+### Storage
+- Use zero-copy cloning for dev/test
+- Time travel retention: 1 day (standard), 90 days (enterprise)
+- Fail-safe: 7 days (not configurable)
+
+## References
+- [Snowflake Best Practices](https://docs.snowflake.com/en/user-guide/best-practices)
````
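The COPY INTO file-sizing guidance in the added skill (100-250MB compressed per file) can be checked before loading. A hypothetical helper with sizes in bytes; the file names are placeholders:

```python
# Sketch: flag staged files outside the 100-250MB compressed sweet
# spot suggested above, before running COPY INTO. Sizes in bytes.
MB = 1024 * 1024

def off_sized(files: dict, lo: int = 100 * MB, hi: int = 250 * MB) -> list:
    """Return file names that should be split or merged before loading."""
    return [name for name, size in files.items() if not lo <= size <= hi]
```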
package/templates/.claude/skills/spark-best-practices/SKILL.md
ADDED

````diff
@@ -0,0 +1,52 @@
+---
+name: spark-best-practices
+description: Apache Spark best practices for PySpark and Scala distributed data processing
+user-invocable: false
+---
+
+# Apache Spark Best Practices
+
+## Performance Optimization
+
+### Broadcast Joins (CRITICAL)
+- Use `broadcast(small_df)` for small-large table joins
+- Default broadcast threshold: 10MB (`spark.sql.autoBroadcastJoinThreshold`)
+- Avoid broadcast for tables > 100MB
+
+### Shuffles (CRITICAL)
+- Minimize shuffles: expensive operations
+- Use `coalesce()` to reduce partitions without shuffle
+- Use `repartition()` only when necessary (causes shuffle)
+- Predicate pushdown: filter before joins
+
+### Caching
+- Cache DataFrames used multiple times: `df.cache()` or `df.persist()`
+- Choose storage level: MEMORY_ONLY, MEMORY_AND_DISK, DISK_ONLY
+- Unpersist when done: `df.unpersist()`
+
+## Resource Management
+
+### Executor Configuration
+- Executor memory: 80% of available memory per executor
+- Executor cores: 4-5 cores per executor (optimal)
+- Dynamic allocation: enable for varying workloads
+
+### Partitioning
+- Optimal partition size: 100-200MB
+- Too few partitions: underutilized cluster
+- Too many partitions: task overhead
+
+## Data Processing
+
+### UDFs
+- Prefer built-in functions over UDFs
+- Use Pandas UDF for vectorized operations
+- Avoid Python UDFs (serialization overhead)
+
+### Storage Formats
+- Parquet: default for analytics (columnar, compression)
+- ORC: alternative to Parquet
+- Delta/Iceberg: ACID transactions, time travel
+
+## References
+- [Spark Performance Tuning](https://spark.apache.org/docs/latest/tuning.html)
````
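The partitioning rule in the added skill is simple arithmetic: total data size divided by a ~100-200MB per-partition target. A sketch (the 128MB target is an assumed middle value; actual tuning also weighs core count and skew, e.g. via `df.repartition(n)` in PySpark):

```python
import math

# Sketch of the partition-sizing rule above: aim for ~100-200MB
# per partition, here using an assumed 128MB target.
def partition_count(dataset_mb: int, target_mb: int = 128) -> int:
    return max(1, math.ceil(dataset_mb / target_mb))
```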
package/templates/CLAUDE.md.en
CHANGED

````diff
@@ -151,6 +151,7 @@ Flow:
 | `/dev-refactor` | Refactor code |
 | `/memory-save` | Save session context to claude-mem |
 | `/memory-recall` | Search and recall memories |
+| `/monitoring-setup` | Enable/disable OTel console monitoring |
 | `/npm-publish` | Publish package to npm registry |
 | `/npm-version` | Manage semantic versions |
 | `/npm-audit` | Audit dependencies |
@@ -168,12 +169,12 @@ Flow:
 project/
 +-- CLAUDE.md          # Entry point
 +-- .claude/
-|   +-- agents/        # Subagent definitions (
-|   +-- skills/        # Skills (
+|   +-- agents/        # Subagent definitions (42 files)
+|   +-- skills/        # Skills (51 directories)
 |   +-- rules/         # Global rules (R000-R018)
 |   +-- hooks/         # Hook scripts (memory, HUD)
 |   +-- contexts/      # Context files (ecomode)
-+-- guides/            # Reference docs (
++-- guides/            # Reference docs (22 topics)
 ```
 
 ## Orchestration
@@ -181,6 +182,7 @@ project/
 Orchestration is handled by routing skills in the main conversation:
 - **secretary-routing**: Routes management tasks to manager agents
 - **dev-lead-routing**: Routes development tasks to language/framework experts
+- **de-lead-routing**: Routes data engineering tasks to DE/pipeline experts
 - **qa-lead-routing**: Coordinates QA workflow
 
 The main conversation acts as the sole orchestrator. Subagents cannot spawn other subagents.
@@ -193,13 +195,14 @@ The main conversation acts as the sole orchestrator. Subagents cannot spawn othe
 | SW Engineer/Backend | 5 | be-fastapi-expert, be-springboot-expert, be-go-backend-expert, be-express-expert, be-nestjs-expert |
 | SW Engineer/Frontend | 3 | fe-vercel-agent, fe-vuejs-agent, fe-svelte-agent |
 | SW Engineer/Tooling | 3 | tool-npm-expert, tool-optimizer, tool-bun-expert |
-
+| DE Engineer | 6 | de-airflow-expert, de-dbt-expert, de-spark-expert, de-kafka-expert, de-snowflake-expert, de-pipeline-expert |
+| SW Engineer/Database | 3 | db-supabase-expert, db-postgres-expert, db-redis-expert |
 | SW Architect | 2 | arch-documenter, arch-speckit-agent |
 | Infra Engineer | 2 | infra-docker-expert, infra-aws-expert |
 | QA Team | 3 | qa-planner, qa-writer, qa-engineer |
 | Manager | 7 | mgr-creator, mgr-updater, mgr-supplier, mgr-gitnerd, mgr-sync-checker, mgr-sauron, mgr-claude-code-bible |
 | System | 2 | sys-memory-keeper, sys-naggy |
-| **Total** | **
+| **Total** | **42** | |
 
 ## Agent Teams
 
````
package/templates/CLAUDE.md.ko
CHANGED

````diff
@@ -151,6 +151,7 @@ oh-my-customcode로 구동됩니다.
 | `/dev-refactor` | 코드 리팩토링 |
 | `/memory-save` | 세션 컨텍스트를 claude-mem에 저장 |
 | `/memory-recall` | 메모리 검색 및 리콜 |
+| `/monitoring-setup` | OTel 콘솔 모니터링 활성화/비활성화 |
 | `/npm-publish` | npm 레지스트리에 패키지 배포 |
 | `/npm-version` | 시맨틱 버전 관리 |
 | `/npm-audit` | 의존성 감사 |
@@ -168,12 +169,12 @@ oh-my-customcode로 구동됩니다.
 project/
 +-- CLAUDE.md          # 진입점
 +-- .claude/
-|   +-- agents/        # 서브에이전트 정의 (
-|   +-- skills/        # 스킬 (
+|   +-- agents/        # 서브에이전트 정의 (42 파일)
+|   +-- skills/        # 스킬 (51 디렉토리)
 |   +-- rules/         # 전역 규칙 (R000-R018)
 |   +-- hooks/         # 훅 스크립트 (메모리, HUD)
 |   +-- contexts/      # 컨텍스트 파일 (ecomode)
-+-- guides/            # 레퍼런스 문서 (
++-- guides/            # 레퍼런스 문서 (22 토픽)
 ```
 
 ## 오케스트레이션
@@ -181,6 +182,7 @@ project/
 오케스트레이션은 메인 대화의 라우팅 스킬로 처리됩니다:
 - **secretary-routing**: 매니저 에이전트로 관리 작업 라우팅
 - **dev-lead-routing**: 언어/프레임워크 전문가에게 개발 작업 라우팅
+- **de-lead-routing**: 데이터 엔지니어링 작업을 DE/파이프라인 전문가에게 라우팅
 - **qa-lead-routing**: QA 워크플로우 조율
 
 메인 대화가 유일한 오케스트레이터 역할을 합니다. 서브에이전트는 다른 서브에이전트를 생성할 수 없습니다.
@@ -193,13 +195,14 @@ project/
 | SW Engineer/Backend | 5 | be-fastapi-expert, be-springboot-expert, be-go-backend-expert, be-express-expert, be-nestjs-expert |
 | SW Engineer/Frontend | 3 | fe-vercel-agent, fe-vuejs-agent, fe-svelte-agent |
 | SW Engineer/Tooling | 3 | tool-npm-expert, tool-optimizer, tool-bun-expert |
-
+| DE Engineer | 6 | de-airflow-expert, de-dbt-expert, de-spark-expert, de-kafka-expert, de-snowflake-expert, de-pipeline-expert |
+| SW Engineer/Database | 3 | db-supabase-expert, db-postgres-expert, db-redis-expert |
 | SW Architect | 2 | arch-documenter, arch-speckit-agent |
 | Infra Engineer | 2 | infra-docker-expert, infra-aws-expert |
 | QA Team | 3 | qa-planner, qa-writer, qa-engineer |
 | Manager | 7 | mgr-creator, mgr-updater, mgr-supplier, mgr-gitnerd, mgr-sync-checker, mgr-sauron, mgr-claude-code-bible |
 | System | 2 | sys-memory-keeper, sys-naggy |
-| **총계** | **
+| **총계** | **42** | |
 
 ## Agent Teams
 
````