@cronicorn/mcp-server 1.6.1 → 1.7.1

---
id: use-cases
title: Use Cases & Examples
description: Real-world scenarios showing how Cronicorn adapts to your application needs
tags:
  - user
  - examples
sidebar_position: 3
mcp:
  uri: file:///docs/use-cases.md
  mimeType: text/markdown
  priority: 0.8
---

# Use Cases & Examples

Cronicorn's adaptive scheduling shines in scenarios where static cron jobs fall short. Here are real-world examples across different industries.

## E-Commerce: Flash Sale Monitoring

You're running a Black Friday sale and traffic spikes to 5× normal volume. Your checkout system needs close monitoring, but you don't want to manually adjust schedules every hour.

**Setup:**

| Endpoint | Baseline | Purpose |
|----------|----------|---------|
| Traffic Monitor | Every 5 min | Track visitors and page load times |
| Order Processor Health | Every 3 min | Monitor checkout performance |
| Inventory Sync Check | Every 10 min | Ensure stock accuracy |

**What Happens:**

- **9am (sale starts)**: Traffic surge detected. AI tightens monitoring to every 30 seconds for the next hour.
- **10am (struggling)**: Slow pages detected. AI activates cache warm-up (one-shot) and increases monitoring.
- **11am (recovering)**: Performance stabilizes. AI gradually relaxes back to baseline over 30 minutes.
- **2pm (stable)**: All endpoints return to normal 3-10 minute intervals.

**Result**: Your team gets real-time visibility during the surge without manual schedule changes. The system backs off automatically when things stabilize.
39

## Marketing: Content Publishing & Engagement Tracking

You publish blog posts and want to monitor engagement closely for the first 24 hours, then back off as attention fades.

**Setup:**

| Endpoint | Baseline | Behavior |
|----------|----------|----------|
| Publish Post | Cron: "0 9 * * MON" | Publishes every Monday at 9am |
| Track Analytics | Every 30 min | Page views, time on page |
| Check Email Opens | Every hour | Newsletter engagement |
| Promote Top Posts | Paused | Activates when engagement crosses threshold |

**What Happens:**

- **Monday 9am**: Post publishes on schedule
- **9am-6pm**: AI tightens analytics tracking to every 5 minutes (fresh content gets attention)
- **Tuesday-Friday**: AI relaxes to hourly checks (engagement is stable)
- **Friday**: High engagement detected; AI activates the promotional boost endpoint
- **Next Monday**: Analytics resets to baseline 30-minute intervals

**Result**: You get real-time engagement data when it matters most, without paying for excessive API calls once traffic stabilizes.
63

## DevOps: Infrastructure Health & Auto-Remediation

Your SaaS platform runs background monitoring that should attempt automatic fixes before paging engineers at 2am.

**Setup:**

- **Health checks** run continuously: API latency, error rates, database connections, queue depth
- **Investigation endpoints** start paused: slow query logs, memory profilers, distributed traces
- **Remediation actions** fire as one-shots: restart unhealthy pods, flush caches, scale workers
- **Alerts** escalate intelligently: Slack ops → Slack support → PagerDuty on-call

**What Happens:**

1. Database latency increases above threshold
2. AI activates slow query investigation (was paused)
3. Investigation identifies problematic query
4. System fires one-shot remediation to kill long-running queries
5. If fixed: Alert sent to Slack, investigation pauses again
6. If not fixed after 3 attempts: Page on-call engineer

**Result**: Most issues self-heal without waking anyone up. Engineers only get paged when automation can't resolve the problem.
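The retry-then-page policy in steps 4-6 can be sketched as a small decision rule. This is illustrative only: `escalate` and `AttemptResult` are hypothetical names, not Cronicorn's API; in practice the AI applies the equivalent policy from your endpoint instructions.

```typescript
// Illustrative sketch of the escalation policy described above.
type AttemptResult = "fixed" | "not-fixed";

function escalate(
  attempts: AttemptResult[],
  maxAttempts = 3,
): "slack" | "page" | "retry" {
  const lastFixed = attempts[attempts.length - 1] === "fixed";
  if (lastFixed) return "slack"; // resolved: notify Slack, re-pause investigation
  if (attempts.length >= maxAttempts) return "page"; // automation gave up: page on-call
  return "retry"; // fire another one-shot remediation
}
```

The key property is that paging is a last resort: it only happens after the configured number of failed remediation attempts.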
85

## Data Engineering: ETL Pipeline Coordination

You run nightly data syncs from Salesforce, transform the data, and load it into your warehouse. The transformation step should only run after extraction succeeds.

**Setup:**

| Endpoint | Schedule | Notes |
|----------|----------|-------|
| Extract from Salesforce | Cron: "0 2 * * *" | Daily at 2am |
| Transform Records | Paused | Activates only after extraction |
| Load to Warehouse | Paused | Activates after transformation |
| Check Pipeline Lag | Every 2 min | Monitors data freshness |

**What Happens:**

- **2:00am**: Salesforce extraction starts
- **2:15am**: Extraction completes successfully. AI activates the transformation endpoint for an immediate one-shot run.
- **2:20am**: Transformation finishes. AI activates the warehouse load for a one-shot run.
- **2:25am**: Full pipeline complete. Transformation and load endpoints return to paused until tomorrow.
- **If extraction fails**: Transformation and loading stay paused. An alert fires.

**Result**: You coordinate complex dependencies without writing custom orchestration code. Each step waits for upstream success.
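The gating behavior above reduces to a simple rule: each step runs only if every upstream step succeeded. A minimal sketch, with hypothetical names (`runPipeline`, `StepFn`) standing in for Cronicorn's actual activation mechanics:

```typescript
// Each step reports success; downstream steps stay "paused" (never run)
// once an upstream step fails.
type StepFn = () => boolean; // true = success

function runPipeline(steps: StepFn[]): number {
  let completed = 0;
  for (const step of steps) {
    if (!step()) break; // upstream failed: leave the rest paused, alert elsewhere
    completed++;
  }
  return completed; // number of steps that ran successfully
}
```

With extract/transform/load as the three steps, a failed extraction leaves the count at 0 and the later steps untouched.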
108

## SaaS Operations: Usage Monitoring & Billing

You need to track customer API usage hourly, send quota warnings at specific thresholds, and generate invoices monthly.

**Setup:**

- **Usage tracking**: Aggregate API calls every hour
- **Quota monitoring**: Check for overages every 10 minutes
- **Warning emails**: One-shot sends at 80%, 90%, 100% of quota
- **Invoice generation**: Cron runs on 1st of each month
- **Payment reminders**: Escalating sequence (3 days → 7 days → 14 days → 30 days overdue)

**What Happens:**

- **Throughout month**: Usage tracked hourly, quota checked every 10 min
- **Customer approaching limit**: AI fires one-shot warning emails at each threshold
- **1st of month at midnight**: Invoices generated for all accounts
- **Payment not received**: Reminder sequence starts automatically (paused between emails)
- **Payment received**: AI pauses further reminders

**Result**: Billing runs like clockwork with minimal manual intervention. Customers get timely warnings, and dunning sequences are fully automated.
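The "one-shot per threshold" behavior means each warning fires at most once per billing period. A sketch of that rule (the function name and its inputs are hypothetical, not part of Cronicorn's API):

```typescript
// Returns which quota warnings should fire now, given current usage and the
// thresholds already sent, so each warning stays a one-shot.
function dueWarnings(
  usedPct: number,
  alreadySent: number[],
  thresholds: number[] = [80, 90, 100],
): number[] {
  return thresholds.filter((t) => usedPct >= t && !alreadySent.includes(t));
}
```

Resetting `alreadySent` at the start of each billing period re-arms all three warnings.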
130

## Web Scraping: Competitive Price Monitoring

You track competitor pricing across multiple sites, but need to respect their rate limits and adapt when those limits change.

**Setup:**

| Endpoint | Baseline | Constraints |
|----------|----------|-------------|
| Scrape Prices | Every 4 hours | Min: 30 min, Max: 12 hours |
| Validate Data | After each scrape | Checks for blocks/CAPTCHAs |
| Check Proxy Health | Every 5 min | Monitors rotating proxies |
| Alert Price Drop | Paused | Fires when competitor drops price |

**What Happens:**

- **Normal operation**: Scraping every 4 hours
- **Rate limit warning detected**: AI backs off to every 6 hours for the next day
- **Competitor sale detected**: AI tightens to every 30 minutes (respects min interval)
- **CAPTCHA encountered**: Validation fails; AI pauses scraping for 2 hours
- **Proxy recovery**: Health check passes; AI resumes scraping

**Result**: You adapt to site changes automatically. Scrapers respect rate limits, pause when blocked, and intensify during competitor sales.
153

## Common Patterns

### Adaptive Monitoring Frequency

Start with a baseline, tighten when issues are detected, relax when stable:

```
Baseline: Every 5 minutes
During issues: Every 30 seconds (respects min interval)
When stable: Every 15 minutes (respects max interval)
```
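The clamping rule behind "respects min/max interval" is simple to state in code. A minimal sketch, assuming illustrative bounds of 30 seconds and 15 minutes; `nextIntervalMs` is a hypothetical name, not Cronicorn's API:

```typescript
// Whatever interval the AI proposes is bounded by the per-endpoint min/max.
function nextIntervalMs(proposedMs: number, minMs: number, maxMs: number): number {
  return Math.min(maxMs, Math.max(minMs, proposedMs));
}
```

So an aggressive proposal of 5 seconds during an incident still lands on the 30-second floor, and a lazy proposal of 2 hours lands on the 15-minute ceiling.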
165

### Conditional Endpoint Activation

Keep expensive operations paused until needed:

```
Investigation endpoints: Paused by default
Activate: When health checks fail
Deactivate: When issues resolve or timeout expires
```
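This activation pattern is a two-state machine. A minimal sketch with hypothetical names (`EndpointState`, `nextState`), shown only to make the transitions concrete:

```typescript
type EndpointState = "paused" | "active";

// Paused endpoints wake on a failing health check; active ones re-pause
// when the issue resolves or the activation timeout expires.
function nextState(
  state: EndpointState,
  healthy: boolean,
  timedOut: boolean,
): EndpointState {
  if (state === "paused" && !healthy) return "active"; // activate investigation
  if (state === "active" && (healthy || timedOut)) return "paused"; // stand down
  return state;
}
```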
175

### One-Shot Actions with Cooldowns

Fire immediate actions that shouldn't repeat too quickly:

```
Cache warm-up: Fire once now, don't repeat for 15 minutes
Alert escalation: Send once, pause for 1 hour before next
Scale workers: Trigger immediately, cooldown for 20 minutes
```
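A cooldown is just a gate on the time since the last firing. A sketch of that check (`canFire` is a hypothetical helper; times are epoch milliseconds):

```typescript
// A one-shot may fire only if it has never fired, or its cooldown window
// since the last firing has elapsed.
function canFire(nowMs: number, lastFiredMs: number | null, cooldownMs: number): boolean {
  return lastFiredMs === null || nowMs - lastFiredMs >= cooldownMs;
}
```

With a 15-minute (900000 ms) cooldown, a cache warm-up requested 10 minutes after the last one is suppressed.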
185

### Calendar-Based + Adaptive Hybrid

Use cron for time-sensitive tasks, intervals for everything else:

```
Daily reports: Cron "0 9 * * *" (always 9am)
Monitoring: Interval 300000ms, i.e. 5 min (adapts between min/max)
Monthly billing: Cron "0 0 1 * *" (1st of month)
```
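The two modes above might be captured in configuration like the following. This is a hypothetical shape for illustration, not Cronicorn's actual schema; the field names are invented:

```typescript
// Cron entries fire at fixed wall-clock times; interval entries adapt
// between the configured bounds.
const dailyReports = { mode: "cron", expr: "0 9 * * *" }; // always 9am
const monitoring = {
  mode: "interval",
  baselineMs: 300_000, // 5 minutes
  minMs: 30_000, // floor: 30 seconds
  maxMs: 900_000, // ceiling: 15 minutes
};
const monthlyBilling = { mode: "cron", expr: "0 0 1 * *" }; // 1st of month
```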
195

## When to Use Cronicorn

**Great fit:**

- You adjust cron schedules manually based on metrics
- You need backoff logic after failures
- You want tight monitoring during critical periods
- You coordinate multiple dependent endpoints
- You need to respect rate limits that vary with usage

**Probably overkill:**

- Simple daily backup at 2am (basic cron is fine)
- Completely static schedules that never change
- Sub-second precision required (use a real-time system)