@quarri/claude-data-tools 1.0.2 → 1.1.1

@@ -0,0 +1,338 @@
+ ---
+ description: Debug and heal failing data extraction connectors
+ globs:
+ alwaysApply: false
+ ---
+
+ # /quarri-debug-connector - Connector Healing
+
+ Debug and fix failing data extraction connectors by retrieving their code, running it locally, identifying errors, and submitting healed versions.
+
+ ## When to Use
+
+ Use `/quarri-debug-connector` when:
+ - A scheduled extraction job is failing
+ - Users report data not updating
+ - Connector logs show errors
+ - The pipeline needs updating for upstream API changes
+
+ ## Debugging Workflow
+
+ ### Step 1: Identify the Problem
+
+ Get information about the failing connector:
+
+ ```
+ quarri_get_connector_logs({
+   connector_id: "stripe_connector_123",
+   lines: 100
+ })
+ ```
+
+ This returns:
+ - Recent execution logs
+ - Error messages
+ - Last successful run timestamp
+ - Configuration details
+
+ ### Step 2: Retrieve Connector Code
+
+ Get the current connector source:
+
+ ```
+ quarri_get_connector_code({
+   connector_id: "stripe_connector_123"
+ })
+ ```
+
+ Returns the full Python pipeline code.
+
+ ### Step 3: Analyze the Error
+
+ Common error categories:
+
+ #### Authentication Errors
+ ```
+ 401 Unauthorized
+ 403 Forbidden
+ Invalid API key
+ Token expired
+ ```
+
+ **Diagnosis**: API credentials are invalid or expired
+
+ **Fix**: Update credentials or refresh OAuth tokens
+
+ #### API Changes
+ ```
+ 404 Not Found - endpoint /v1/old_endpoint
+ Field 'old_field' not found
+ ```
+
+ **Diagnosis**: API version changed or endpoint deprecated
+
+ **Fix**: Update endpoint paths and field mappings
+
+ #### Rate Limiting
+ ```
+ 429 Too Many Requests
+ Rate limit exceeded
+ ```
+
+ **Diagnosis**: Too many API calls in too short a window
+
+ **Fix**: Add rate limiting and backoff logic
+
+ #### Data Format Changes
+ ```
+ KeyError: 'expected_field'
+ TypeError: cannot unpack
+ JSONDecodeError
+ ```
+
+ **Diagnosis**: Response structure changed
+
+ **Fix**: Update the parsing logic for the new format
+
+ #### Network/Timeout
+ ```
+ ConnectionError
+ Timeout
+ DNS resolution failed
+ ```
+
+ **Diagnosis**: Network issues or long-running requests
+
+ **Fix**: Add retry logic and increase timeouts
+
+ ### Step 4: Test Locally
+
+ Run the connector locally to reproduce and fix the issue:
+
+ 1. **Set up environment**:
+    ```bash
+    # Create virtual environment
+    python -m venv venv
+    source venv/bin/activate
+    pip install dlt requests
+    ```
+
+ 2. **Set credentials**:
+    ```bash
+    export STRIPE_API_KEY="sk_test_..."
+    # or use .dlt/secrets.toml
+    ```
+
+ 3. **Run with debugging**:
+    ```python
+    import dlt
+    import logging
+
+    # Enable detailed logging
+    logging.basicConfig(level=logging.DEBUG)
+
+    # Run pipeline with small dataset
+    pipeline = dlt.pipeline(
+        pipeline_name="debug_stripe",
+        destination="duckdb",  # Local for testing
+        dataset_name="test"
+    )
+
+    # Test specific resource
+    source = stripe_source()
+    load_info = pipeline.run(source.with_resources("customers").add_limit(10))
+    print(load_info)
+    ```
+
+ 4. **Iterate on fixes**:
+    - Make changes to the code
+    - Test with small data samples
+    - Verify data loads correctly
+
+ ### Step 5: Apply Fixes
+
+ Common fix patterns:
+
+ #### Add Retry Logic
+ ```python
+ import requests
+ from tenacity import retry, stop_after_attempt, wait_exponential
+
+ @retry(
+     stop=stop_after_attempt(3),
+     wait=wait_exponential(multiplier=1, min=4, max=60)
+ )
+ def fetch_data(url, headers):
+     response = requests.get(url, headers=headers)
+     response.raise_for_status()
+     return response.json()
+ ```
+
+ #### Handle Rate Limiting
+ ```python
+ import time
+
+ import requests
+
+ def rate_limited_fetch(url, headers, calls_per_minute=60):
+     # Proactive throttle: space out calls to stay under the limit
+     time.sleep(60 / calls_per_minute)
+     response = requests.get(url, headers=headers)
+
+     if response.status_code == 429:
+         retry_after = int(response.headers.get('Retry-After', 60))
+         time.sleep(retry_after)
+         return rate_limited_fetch(url, headers, calls_per_minute)
+
+     return response.json()
+ ```
+
+ #### Update Field Mappings
+ ```python
+ # Old
+ data['old_field_name']
+
+ # New - with fallback for backwards compatibility
+ data.get('new_field_name', data.get('old_field_name'))
+ ```
+
+ #### Handle Missing Data
+ ```python
+ from datetime import datetime
+
+ import dlt
+
+ @dlt.resource
+ def customers():
+     for customer in fetch_customers():
+         yield {
+             'id': customer['id'],
+             'name': customer.get('name', ''),  # Handle missing
+             'email': customer.get('email'),  # Allow null
+             'created_at': customer.get('created_at', datetime.now().isoformat())
+         }
+ ```
+
+ #### Update API Version
+ ```python
+ # Old
+ base_url = "https://api.example.com/v1"
+
+ # New
+ base_url = "https://api.example.com/v2"
+
+ # Update endpoints
+ config = {
+     "resources": [
+         {
+             "name": "customers",
+             "endpoint": {
+                 "path": "customers",  # was: "v1/customers"
+                 "params": {"version": "2024-01"}
+             }
+         }
+     ]
+ }
+ ```
+
+ ### Step 6: Validate Fix
+
+ Before submitting, validate thoroughly:
+
+ 1. **Run full extraction** (not just a sample):
+    ```python
+    load_info = pipeline.run(source)
+    print(load_info)  # summarizes load packages and completed jobs
+    ```
+
+ 2. **Verify data quality**:
+    ```sql
+    -- Check row counts
+    SELECT COUNT(*) FROM test.customers;
+
+    -- Check for nulls in required fields
+    SELECT COUNT(*) FROM test.customers WHERE id IS NULL;
+
+    -- Verify date ranges
+    SELECT MIN(created_at), MAX(created_at) FROM test.customers;
+    ```
+
+ 3. **Compare with previous data**:
+    ```sql
+    -- Ensure no data loss
+    SELECT 'before' AS source, COUNT(*) AS row_count FROM production.customers
+    UNION ALL
+    SELECT 'after' AS source, COUNT(*) AS row_count FROM test.customers;
+    ```
+
+ ### Step 7: Submit Healed Code
+
+ Once validated, submit the fix:
+
+ ```
+ quarri_update_connector_code({
+   connector_id: "stripe_connector_123",
+   pipeline_code: "[full fixed Python code]",
+   change_summary: "Fixed rate limiting by adding exponential backoff"
+ })
+ ```
+
+ ## Error Pattern Reference
+
+ ### Authentication
+ | Error | Cause | Fix |
+ |-------|-------|-----|
+ | 401 Unauthorized | Invalid/expired credentials | Update API key |
+ | 403 Forbidden | Insufficient permissions | Check API scopes |
+ | OAuth token expired | Token TTL exceeded | Implement refresh flow |
+
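+ The "implement refresh flow" fix usually amounts to caching the access token and refreshing it shortly before expiry. A minimal sketch of a generic OAuth2 refresh (the endpoint, field names, and caching approach vary by provider and are assumptions here):
+
+ ```python
+ import time
+
+ import requests
+
+ _token_cache = {}
+
+ def get_access_token(token_url, client_id, client_secret, refresh_token):
+     """Return a cached access token, refreshing it shortly before expiry."""
+     if _token_cache.get("expires_at", 0) > time.time() + 60:
+         return _token_cache["access_token"]
+     response = requests.post(token_url, data={
+         "grant_type": "refresh_token",
+         "refresh_token": refresh_token,
+         "client_id": client_id,
+         "client_secret": client_secret,
+     })
+     response.raise_for_status()
+     payload = response.json()
+     _token_cache["access_token"] = payload["access_token"]
+     # Providers commonly return expires_in (seconds); the default is a guess
+     _token_cache["expires_at"] = time.time() + payload.get("expires_in", 3600)
+     return _token_cache["access_token"]
+ ```
+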
+ ### Rate Limiting
+ | Error | Cause | Fix |
+ |-------|-------|-----|
+ | 429 Too Many Requests | Exceeded rate limit | Add backoff/throttling |
+ | Quota exceeded | Daily/monthly limit hit | Batch requests, spread over time |
+
+ ### Data Format
+ | Error | Cause | Fix |
+ |-------|-------|-----|
+ | KeyError | Missing field | Use .get() with default |
+ | TypeError | Wrong data type | Add type conversion |
+ | JSONDecodeError | Invalid JSON response | Handle non-JSON responses |
+
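+ For the JSONDecodeError row: a common trap is an HTML error page fed straight into `response.json()`. A minimal sketch of defensive parsing (`safe_json` is a hypothetical helper, not part of the connector API):
+
+ ```python
+ import requests
+
+ def safe_json(response: requests.Response) -> dict:
+     """Parse a JSON body, raising a descriptive error for non-JSON responses."""
+     content_type = response.headers.get("Content-Type", "")
+     if "application/json" not in content_type:
+         raise ValueError(f"Expected JSON, got {content_type!r}: {response.text[:200]}")
+     try:
+         return response.json()
+     except ValueError as exc:  # json.JSONDecodeError subclasses ValueError
+         raise ValueError(f"Invalid JSON body: {response.text[:200]}") from exc
+ ```
+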
+ ### Network
+ | Error | Cause | Fix |
+ |-------|-------|-----|
+ | ConnectionError | Network failure | Add retry logic |
+ | Timeout | Request too slow | Increase timeout, paginate |
+ | DNS error | Resolution failure | Check URL, add retry |
+
+ ## Output Format
+
+ ```markdown
+ ## Connector Debug Report: [Connector Name]
+
+ ### Error Summary
+ - **Status**: [Failing/Fixed]
+ - **Last Success**: [Timestamp]
+ - **Error Type**: [Category]
+ - **Error Message**: [Actual error]
+
+ ### Root Cause Analysis
+ [Explanation of why the connector is failing]
+
+ ### Fix Applied
+ ```python
+ [Code changes - before/after]
+ ```
+
+ ### Validation Results
+ - Test run: [Success/Failure]
+ - Records loaded: [Count]
+ - Data quality: [Pass/Fail with details]
+
+ ### Next Steps
+ - [ ] Submit healed code
+ - [ ] Monitor next scheduled run
+ - [ ] [Any additional actions]
+ ```
+
+ ## Best Practices
+
+ 1. **Always test locally first** - Don't submit untested fixes
+ 2. **Keep change logs** - Document what changed and why
+ 3. **Preserve backwards compatibility** - Handle old and new formats when possible
+ 4. **Code defensively** - Handle missing fields, rate limits, retries
+ 5. **Monitor after the fix** - Verify the next scheduled run succeeds
@@ -0,0 +1,372 @@
+ ---
+ description: Root cause analysis using metric trees to diagnose KPI changes
+ globs:
+ alwaysApply: false
+ ---
+
+ # /quarri-diagnose - Root Cause Analysis
+
+ Perform systematic root cause analysis using metric trees to diagnose why KPIs changed.
+
+ ## When to Use
+
+ Use `/quarri-diagnose` when users ask diagnostic questions:
+ - "Why did revenue drop last month?"
+ - "What's causing churn to increase?"
+ - "Why is conversion rate declining?"
+ - "What's driving the growth in customer acquisition?"
+
+ This skill differs from `/quarri-analyze`:
+ - `/quarri-analyze`: General analysis with statistics and insights
+ - `/quarri-diagnose`: Focused root cause investigation using metric decomposition
+
+ ## Diagnostic Workflow
+
+ ```
+ 1. Identify the metric of concern
+
+ 2. Build/retrieve metric tree (decompose KPI)
+
+ 3. Query each component for current vs previous period
+
+ 4. Calculate period-over-period changes
+
+ 5. Identify component with largest negative impact
+
+ 6. Drill down recursively if needed
+
+ 7. Generate root cause hypothesis with evidence
+
+ 8. Recommend actions to address root cause
+ ```
+
+ ## Step 1: Identify the Metric
+
+ Parse the user's question to determine:
+ - **Target metric**: What KPI are they concerned about?
+ - **Direction**: Is it a drop, increase, or unexpected behavior?
+ - **Time period**: When did this happen? What's the comparison period?
+
+ **Examples:**
+ - "Why did revenue drop?" → Metric: Revenue, Direction: Decrease
+ - "What's causing churn to increase?" → Metric: Churn Rate, Direction: Increase
+ - "Conversion tanked last week" → Metric: Conversion Rate, Direction: Decrease, Period: Last week
+
+ ## Step 2: Build the Metric Tree
+
+ Either retrieve an existing metric tree or build one dynamically:
+
+ ### Revenue Tree
+ ```
+ Revenue = Customers × Orders/Customer × Revenue/Order
+
+ ├── Customers
+ │   ├── New Customers
+ │   └── Returning Customers
+
+ ├── Orders per Customer
+ │   └── Purchase frequency
+
+ └── Revenue per Order
+     ├── Units per order
+     └── Price per unit
+ ```
+
+ ### Conversion Rate Tree
+ ```
+ Conversion = Conversions / Visitors
+
+ ├── Visitors
+ │   ├── Organic traffic
+ │   ├── Paid traffic
+ │   └── Direct traffic
+
+ └── Conversions (by funnel stage)
+     ├── View → Add to Cart
+     ├── Cart → Checkout
+     └── Checkout → Purchase
+ ```
+
+ ### Churn Tree
+ ```
+ Churn Rate = Churned Customers / Total Customers
+
+ ├── Churned Customers
+ │   ├── By tenure (new vs established)
+ │   ├── By segment (enterprise vs SMB)
+ │   └── By product usage
+
+ └── Total Customers
+     └── (Denominator context)
+ ```
+
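+ When building a tree dynamically, a plain data structure is enough to drive the later steps programmatically. A minimal sketch (the dict shape and names are illustrative, not a Quarri API):
+
+ ```python
+ # Metric tree for the revenue example above; "op" says how children combine
+ METRIC_TREE = {
+     "name": "revenue",
+     "op": "multiply",  # revenue = customers × orders_per_customer × revenue_per_order
+     "children": [
+         {"name": "customers", "op": "add",
+          "children": [{"name": "new_customers"}, {"name": "returning_customers"}]},
+         {"name": "orders_per_customer"},
+         {"name": "revenue_per_order", "op": "multiply",
+          "children": [{"name": "units_per_order"}, {"name": "price_per_unit"}]},
+     ],
+ }
+ ```
+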
+ ## Step 3: Query Components
+
+ Generate SQL to calculate each component for current and previous periods:
+
+ ```sql
+ -- Root cause analysis: Revenue components
+ WITH current_period AS (
+     SELECT
+         COUNT(DISTINCT customer_id) as customers,
+         COUNT(DISTINCT CASE WHEN is_new_customer THEN customer_id END) as new_customers,
+         COUNT(DISTINCT CASE WHEN NOT is_new_customer THEN customer_id END) as returning_customers,
+         COUNT(*) as orders,
+         COUNT(*)::float / NULLIF(COUNT(DISTINCT customer_id), 0) as orders_per_customer,
+         SUM(revenue) as revenue,
+         SUM(revenue)::float / NULLIF(COUNT(*), 0) as revenue_per_order,
+         SUM(units)::float / NULLIF(COUNT(*), 0) as units_per_order,
+         SUM(revenue)::float / NULLIF(SUM(units), 0) as price_per_unit
+     FROM quarri.bridge
+     WHERE order_date >= CURRENT_DATE - INTERVAL '30 days'
+ ),
+ previous_period AS (
+     SELECT
+         COUNT(DISTINCT customer_id) as customers,
+         COUNT(DISTINCT CASE WHEN is_new_customer THEN customer_id END) as new_customers,
+         COUNT(DISTINCT CASE WHEN NOT is_new_customer THEN customer_id END) as returning_customers,
+         COUNT(*) as orders,
+         COUNT(*)::float / NULLIF(COUNT(DISTINCT customer_id), 0) as orders_per_customer,
+         SUM(revenue) as revenue,
+         SUM(revenue)::float / NULLIF(COUNT(*), 0) as revenue_per_order,
+         SUM(units)::float / NULLIF(COUNT(*), 0) as units_per_order,
+         SUM(revenue)::float / NULLIF(SUM(units), 0) as price_per_unit
+     FROM quarri.bridge
+     WHERE order_date >= CURRENT_DATE - INTERVAL '60 days'
+       AND order_date < CURRENT_DATE - INTERVAL '30 days'
+ )
+ SELECT
+     metric,
+     previous_value,
+     current_value,
+     change_pct,
+     impact_pct
+ FROM (
+     SELECT 'customers' as metric, p.customers as previous_value, c.customers as current_value,
+         (c.customers - p.customers)::float / NULLIF(p.customers, 0) * 100 as change_pct,
+         ((c.customers - p.customers) * p.orders_per_customer * p.revenue_per_order)::float / NULLIF(p.revenue, 0) * 100 as impact_pct
+     FROM current_period c, previous_period p
+     UNION ALL
+     -- Continue for all components...
+ ) metrics
+ ORDER BY ABS(impact_pct) DESC;
+ ```
+
+ ## Step 4: Calculate Impact Attribution
+
+ For each component, calculate its contribution to the overall change:
+
+ ### Multiplicative Decomposition
+ For `Revenue = A × B × C`:
+
+ ```
+ Total Change = Revenue_current - Revenue_previous
+
+ Impact of A = (A_curr - A_prev) × B_prev × C_prev
+ Impact of B = A_curr × (B_curr - B_prev) × C_prev
+ Impact of C = A_curr × B_curr × (C_curr - C_prev)
+
+ (The impacts telescope, so they sum to Total Change exactly,
+ though the split depends on the order of components)
+ ```
+
+ ### Additive Decomposition
+ For `Revenue = A + B + C`:
+
+ ```
+ Total Change = Revenue_current - Revenue_previous
+
+ Impact of A = A_curr - A_prev
+ Impact of B = B_curr - B_prev
+ Impact of C = C_curr - C_prev
+
+ (Sum of impacts = Total Change exactly)
+ ```
+
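+ To make the multiplicative scheme concrete, here is a minimal Python sketch (the function and the plain-list representation are illustrative, not part of the skill); it reproduces the Step 5 example below, where the table rounds the two smaller impacts:
+
+ ```python
+ from math import prod
+
+ def multiplicative_impacts(previous, current):
+     """Sequential attribution: component i is evaluated with earlier
+     components at current values and later ones at previous values."""
+     return [
+         prod(current[:i]) * (current[i] - previous[i]) * prod(previous[i + 1:])
+         for i in range(len(previous))
+     ]
+
+ prev = [1000, 2.5, 40.00]  # customers, orders/customer, revenue/order
+ curr = [920, 2.45, 39.90]
+ impacts = multiplicative_impacts(prev, curr)
+ print(impacts)       # [-8000.0, -1840.0, -225.4] (approximately)
+ print(sum(impacts))  # -10065.4 = current revenue - previous revenue
+ ```
+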
+ ## Step 5: Identify Primary Driver
+
+ Rank components by their impact on the overall metric:
+
+ ```
+ Revenue dropped 10% ($100K → $90K = -$10K)
+
+ Impact Attribution:
+ ┌─────────────────────┬──────────┬─────────┬──────────┬──────────────┐
+ │ Component           │ Previous │ Current │ Change % │ Impact $     │
+ ├─────────────────────┼──────────┼─────────┼──────────┼──────────────┤
+ │ Customers           │ 1,000    │ 920     │ -8%      │ -$8,000 ◀──  │
+ │ Orders/Customer     │ 2.5      │ 2.45    │ -2%      │ -$1,800      │
+ │ Revenue/Order       │ $40      │ $39.90  │ -0.25%   │ -$200        │
+ └─────────────────────┴──────────┴─────────┴──────────┴──────────────┘
+
+ PRIMARY DRIVER: Customer count (-8%, -$8K of -$10K total)
+ ```
+
+ ## Step 6: Drill Down
+
+ If the primary driver has sub-components, recurse:
+
+ ```
+ Customer Count dropped 8%
+
+ Sub-component Analysis:
+ ┌─────────────────────┬──────────┬─────────┬──────────┐
+ │ Component           │ Previous │ Current │ Change % │
+ ├─────────────────────┼──────────┼─────────┼──────────┤
+ │ New Customers       │ 300      │ 200     │ -33% ◀── │
+ │ Returning Customers │ 700      │ 720     │ +3%      │
+ └─────────────────────┴──────────┴─────────┴──────────┘
+
+ PRIMARY DRIVER: New customer acquisition (-33%)
+ ```
+
+ Continue drilling until reaching an actionable root cause:
+
+ ```
+ New Customer Acquisition dropped 33%
+
+ Sub-component Analysis (by channel):
+ ┌─────────────────────┬──────────┬─────────┬──────────┐
+ │ Channel             │ Previous │ Current │ Change % │
+ ├─────────────────────┼──────────┼─────────┼──────────┤
+ │ Paid Search         │ 150      │ 80      │ -47% ◀── │
+ │ Paid Social         │ 80       │ 60      │ -25%     │
+ │ Organic             │ 70       │ 60      │ -14%     │
+ └─────────────────────┴──────────┴─────────┴──────────┘
+
+ ROOT CAUSE IDENTIFIED: Paid search acquisition dropped 47%
+ ```
+
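+ The drill-down loop itself can be mechanical. A minimal sketch that walks a metric tree like the one in Step 2, following the child with the largest relative change as a simple proxy for impact (the names and the heuristic are assumptions, not the skill's API):
+
+ ```python
+ def find_root_cause(node, prev, curr):
+     """Recursively follow the largest-moving child down to a leaf.
+     prev/curr map component name -> value for the two periods."""
+     children = node.get("children", [])
+     if not children:
+         return node["name"]
+
+     def rel_change(child):
+         p = prev.get(child["name"], 0)
+         c = curr.get(child["name"], 0)
+         return abs(c - p) / p if p else 0.0
+
+     return find_root_cause(max(children, key=rel_change), prev, curr)
+ ```
+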
+ ## Step 7: Generate Hypothesis
+
+ Based on the analysis, generate an actionable root-cause hypothesis:
+
+ ```markdown
+ ## Root Cause Analysis: Revenue Decline
+
+ ### Summary
+ Revenue dropped 10% ($100K → $90K) in the last 30 days.
+
+ ### Root Cause Chain
+ ```
+ Revenue ↓10%
+ └── Customer Count ↓8% (80% of impact)
+     └── New Customers ↓33%
+         └── Paid Search ↓47% ← ROOT CAUSE
+ ```
+
+ ### Evidence
+ - Paid search was the largest acquisition channel (50% of new customers)
+ - Paid search cost per acquisition increased 35%
+ - Conversion rate from paid search remained stable (not a landing page issue)
+
+ ### Confidence Level
+ **High** - Clear attribution path with consistent data
+
+ ### Hypothesis
+ Paid search performance degraded due to increased competition or
+ a changed bid strategy. The drop in paid search volume directly
+ explains the majority of the revenue decline.
+ ```
+
271
+ ## Step 8: Recommend Actions
272
+
273
+ Provide actionable recommendations:
274
+
275
+ ```markdown
276
+ ### Recommended Actions
277
+
278
+ **Immediate (This Week)**
279
+ 1. Review paid search campaign performance in Google Ads
280
+ 2. Check for recent bid strategy or budget changes
281
+ 3. Analyze competitor activity in auction insights
282
+
283
+ **Short-term (This Month)**
284
+ 1. Optimize underperforming ad groups
285
+ 2. Test new ad copy and landing pages
286
+ 3. Consider increasing budget if ROAS is still profitable
287
+
288
+ **Investigation Needed**
289
+ 1. Did CPC increase? (external market pressure)
290
+ 2. Did quality score drop? (internal issue)
291
+ 3. Were there any campaign changes around the decline date?
292
+ ```
293
+
+ ## Output Format
+
+ ```markdown
+ ## Diagnosis: [Metric] [Direction] [Magnitude]
+
+ ### Metric Tree
+ ```
+ [Top-level metric]
+ ├── [Component 1] [change] ← [marker if primary]
+ ├── [Component 2] [change]
+ └── [Component 3] [change]
+ ```
+
+ ### Impact Attribution
+ | Component | Previous | Current | Change % | Impact |
+ |-----------|----------|---------|----------|--------|
+ | ... | ... | ... | ... | ... |
+
+ ### Root Cause Chain
+ ```
+ [Top metric] [change]
+ └── [Driver 1] [change] (X% of impact)
+     └── [Driver 2] [change]
+         └── [ROOT CAUSE] [change]
+ ```
+
+ ### Confidence Level
+ [High/Medium/Low] - [Reasoning]
+
+ ### Evidence
+ - [Supporting data point 1]
+ - [Supporting data point 2]
+ - [Supporting data point 3]
+
+ ### Hypothesis
+ [Clear statement of what caused the change]
+
+ ### Recommended Actions
+
+ **Immediate**
+ 1. [Action 1]
+ 2. [Action 2]
+
+ **Short-term**
+ 1. [Action 1]
+ 2. [Action 2]
+
+ **Investigation Needed**
+ 1. [Question to answer]
+ 2. [Data to gather]
+ ```
+
+ ## Confidence Levels
+
+ ### High Confidence
+ - Clear attribution path (single dominant driver)
+ - Consistent data across dimensions
+ - Change magnitude is significant (>20%)
+ - Root cause is specific and actionable
+
+ ### Medium Confidence
+ - Multiple contributing drivers
+ - Some data inconsistencies
+ - Need additional context for certainty
+ - Root cause is somewhat general
+
+ ### Low Confidence
+ - No clear dominant driver
+ - Data quality issues
+ - Multiple possible explanations
+ - Need more investigation
+
+ ## Integration
+
+ This skill works well with:
+ - `/quarri-metric`: Use existing metric definitions and trees
+ - `/quarri-query`: Get additional data to validate hypotheses
+ - `/quarri-analyze`: Follow up with detailed segment analysis
+ - `/quarri-chart`: Visualize the change over time