@cronicorn/mcp-server 1.4.5 → 1.5.1

package/dist/docs/technical/system-architecture.md ADDED
@@ -0,0 +1,306 @@
+ ---
+ id: system-architecture
+ title: System Architecture
+ description: Dual-worker architecture with Scheduler and AI Planner
+ tags: [assistant, technical, architecture]
+ sidebar_position: 1
+ mcp:
+   uri: file:///docs/technical/system-architecture.md
+   mimeType: text/markdown
+   priority: 0.85
+   lastModified: 2025-11-02T00:00:00Z
+ ---
+
+ # System Architecture
+
+ **TL;DR:** Cronicorn uses two independent workers (Scheduler and AI Planner) that communicate only through a shared database. The Scheduler executes jobs reliably on schedule, while the AI Planner analyzes execution patterns and suggests schedule adjustments through time-bounded hints. This separation makes the system both reliable and adaptive.
+
+ ---
+
+ ## The Big Picture
+
+ Cronicorn uses a **dual-worker architecture** that makes job scheduling both reliable and intelligent. Instead of one worker doing everything, we split responsibilities between two independent workers:
+
+ 1. **The Scheduler Worker** - Executes jobs on schedule
+ 2. **The AI Planner Worker** - Analyzes patterns and suggests schedule adjustments
+
+ These workers never communicate directly. All coordination happens through the database. This separation is why the system works.
+
+ ## Why Two Workers?
+
+ When you build intelligence directly into a scheduler, every job execution must:
+
+ - Analyze execution history
+ - Make API calls to AI models
+ - Wait for responses
+ - Process recommendations
+ - Handle AI failures gracefully
+
+ All of this happens *in the critical path*. If the AI is slow, jobs run late. If the AI crashes, the scheduler might crash. If you update AI logic, you risk breaking job execution.
+
+ Cronicorn separates execution from decision-making. The Scheduler executes endpoints reliably and on time. The AI Planner analyzes patterns and suggests adjustments. Neither worker depends on the other.
+
+ This separation provides:
+
+ - **Reliability**: The Scheduler keeps running even if AI fails
+ - **Performance**: Jobs execute immediately without waiting for AI analysis
+ - **Safety**: Bugs in AI logic can't break job execution
+ - **Flexibility**: We can upgrade, restart, or replace either worker independently
+ - **Scalability**: We can scale Schedulers and AI Planners separately based on load
+
+ ## How Workers Communicate: The Database as Message Bus
+
+ Since the workers don't talk to each other, they coordinate through **shared database state**. The database is both storage and message bus—workers leave information for each other.
+
+ Here's how it works:
+
+ ### The Scheduler's Perspective
+
+ The Scheduler wakes up every 5 seconds and asks the database: "Which endpoints are due to run right now?" It gets back a list of endpoint IDs, executes each one, and then writes the results back to the database:
+
+ - Execution status (success or failure)
+ - Duration and performance metrics
+ - Response body data (the actual JSON returned by the endpoint)
+ - Updated failure counts
+
+ After recording results, the Scheduler calculates when each endpoint should run next and updates the `nextRunAt` field. Then it goes back to sleep for 5 seconds.
+
+ Notice what the Scheduler *doesn't* do: analyze patterns, make AI decisions, or try to be clever. Execute, record, schedule the next run, repeat.
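+
+ A minimal sketch of that loop in TypeScript, assuming a hypothetical `Store` interface (the type and method names here are illustrative, not the real API; the actual worker lives in `packages/worker-scheduler`):
+
+ ```typescript
+ // Illustrative shapes only.
+ type Endpoint = { id: string };
+ type RunResult = { ok: boolean; durationMs: number; body: unknown };
+
+ interface Store {
+   claimDueEndpoints(now: Date): Promise<Endpoint[]>;
+   recordRun(id: string, run: { status: string; durationMs: number; responseBody: unknown }): Promise<void>;
+   updateNextRun(id: string, nextRunAt: Date): Promise<void>;
+ }
+
+ // Called every 5 seconds.
+ async function schedulerTick(
+   store: Store,
+   execute: (e: Endpoint) => Promise<RunResult>, // performs the HTTP request
+   nextRun: (now: Date, e: Endpoint) => Date,    // the Governor (pure; see "The Role of the Governor" below)
+ ): Promise<void> {
+   const now = new Date();
+   // "Which endpoints are due to run right now?"
+   const due = await store.claimDueEndpoints(now);
+   for (const endpoint of due) {
+     const run = await execute(endpoint);
+     await store.recordRun(endpoint.id, {
+       status: run.ok ? "success" : "failure",
+       durationMs: run.durationMs,
+       responseBody: run.body, // raw JSON, kept for the AI Planner to read
+     });
+     await store.updateNextRun(endpoint.id, nextRun(now, endpoint));
+   }
+ }
+ ```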
+
+ ### The AI Planner's Perspective
+
+ The AI Planner wakes up every 5 minutes (or on a different schedule entirely) and asks the database: "Which endpoints have been active recently?" It gets back a list of endpoint IDs that ran in the last few minutes.
+
+ For each active endpoint, the Planner reads execution history from the database:
+ - Success rates over the last 24 hours
+ - Recent response bodies
+ - Current failure streaks
+ - Existing schedule configuration
+
+ It sends all this context to an AI model and asks: "Should we adjust this endpoint's schedule?" The AI might decide:
+ - "Tighten the interval to 30 seconds because load is increasing" (writes an interval hint)
+ - "Run this immediately to investigate an issue" (writes a one-shot hint)
+ - "Pause until the maintenance window ends" (sets `pausedUntil`)
+ - "Everything looks good, no changes needed" (does nothing)
+
+ Any decisions get written to the database as **hints**—temporary scheduling suggestions with expiration times. Then the Planner moves to the next endpoint.
+
+ Notice what the AI Planner *doesn't* do: execute jobs, manage locks, or worry about reliability. It analyzes and suggests.
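+
+ The same loop as a sketch, with the decision types spelled out (again, the store and decision shapes are assumptions, not the real schema; the actual worker lives in `packages/worker-ai-planner`):
+
+ ```typescript
+ // Illustrative shapes only.
+ type Decision =
+   | { kind: "interval"; intervalMs: number; ttlMs: number } // interval hint
+   | { kind: "one-shot"; runAt: Date; ttlMs: number }        // one-shot hint
+   | { kind: "pause"; until: Date }                          // sets pausedUntil
+   | { kind: "none" };                                       // no changes needed
+
+ interface PlannerStore {
+   findRecentlyActive(now: Date): Promise<string[]>; // endpoint IDs that ran recently
+   getContext(id: string): Promise<unknown>;         // history, bodies, streaks, config
+   writeIntervalHint(id: string, intervalMs: number, ttlMs: number): Promise<void>;
+   writeOneShotHint(id: string, runAt: Date, ttlMs: number): Promise<void>;
+   setPausedUntil(id: string, until: Date): Promise<void>;
+ }
+
+ async function plannerPass(
+   store: PlannerStore,
+   suggest: (context: unknown) => Promise<Decision>, // the AI model call
+ ): Promise<void> {
+   const now = new Date();
+   for (const id of await store.findRecentlyActive(now)) {
+     const decision = await suggest(await store.getContext(id));
+     switch (decision.kind) {
+       case "interval": await store.writeIntervalHint(id, decision.intervalMs, decision.ttlMs); break;
+       case "one-shot": await store.writeOneShotHint(id, decision.runAt, decision.ttlMs); break;
+       case "pause":    await store.setPausedUntil(id, decision.until); break;
+       case "none":     break;
+     }
+   }
+ }
+ ```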
+
+ ### The Database as Coordination Medium
+
+ This database-mediated architecture means:
+
+ 1. **Eventually consistent**: The Scheduler might execute a job before the AI Planner has analyzed its previous run. That's fine—the next execution will use the AI's recommendations.
+
+ 2. **Non-blocking**: The Scheduler never waits for AI. It reads hints already in the database (or finds none) and makes instant decisions.
+
+ 3. **Fault-tolerant**: If the AI Planner crashes, the Scheduler keeps running jobs on baseline schedules. When AI comes back, it resumes making recommendations.
+
+ 4. **Scalable**: Want faster AI analysis? Run more AI Planner instances. Want to handle more job executions? Run more Scheduler instances. They scale independently (one way the claim step supports this is sketched below).
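+
+ A sketch of that claim step, assuming a Postgres-style database (the doc doesn't specify the storage engine, so treat the table, the column names, and the `sql` tag as placeholders):
+
+ ```typescript
+ // Row-level locking lets many Scheduler instances poll concurrently:
+ // FOR UPDATE SKIP LOCKED hands each due endpoint to exactly one claimer,
+ // so scaling out never double-executes a job.
+ type Sql = (strings: TemplateStringsArray, ...values: unknown[]) => Promise<{ id: string }[]>;
+
+ async function claimDueEndpoints(sql: Sql, now: Date): Promise<string[]> {
+   const rows = await sql`
+     SELECT id FROM endpoints
+     WHERE next_run_at <= ${now}
+       AND (paused_until IS NULL OR paused_until <= ${now})
+     FOR UPDATE SKIP LOCKED
+   `;
+   return rows.map((r) => r.id);
+ }
+ ```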
+
+ ## The Three Types of Scheduling Information
+
+ Understanding how the system works requires understanding the three types of scheduling information stored in the database:
+
+ ### 1. Baseline Schedule (Permanent)
+
+ This is what you configure when creating an endpoint. It's either:
+ - A cron expression: `"*/5 * * * *"` (every 5 minutes)
+ - A fixed interval: `300000` milliseconds (5 minutes)
+
+ The baseline represents your *intent*—how often you want the job to run under normal circumstances. It never expires or changes unless you update it.
+
+ ### 2. AI Hints (Temporary, Time-Bounded)
+
+ These are recommendations from the AI Planner. They come in two flavors:
+
+ **Interval Hints**: "Run every 30 seconds for the next hour"
+ - Used when AI wants to change run frequency
+ - Has a TTL (time-to-live)—expires after N minutes
+ - Example: Tightening monitoring during a load spike
+
+ **One-Shot Hints**: "Run at 2:30 PM today"
+ - Used when AI wants to trigger a specific execution
+ - Has a TTL—expires if not used within N minutes
+ - Example: Immediate investigation of a failure
+
+ Both hint types have expiration times. When a hint expires, the system falls back to the baseline schedule. This is a safety mechanism—if the AI makes a bad decision, its effect is time-bounded.
+
+ ### 3. Pause State (Manual Override)
+
+ You can manually pause an endpoint until a specific time. While paused, the endpoint won't run at all, regardless of baseline or hints. This is useful for:
+ - Maintenance windows
+ - Temporarily disabling misbehaving endpoints
+ - Coordinating with external system downtime
+
+ Setting `pausedUntil = null` resumes the endpoint immediately.
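+
+ Taken together, an endpoint's scheduling state can be pictured as a single record. This shape is a sketch: `nextRunAt`, `aiHintExpiresAt`, and `pausedUntil` appear elsewhere in this doc, while the other field names are illustrative:
+
+ ```typescript
+ interface EndpointSchedulingState {
+   // 1. Baseline (permanent): your configured intent, one of the two.
+   baselineCron?: string;        // e.g. "*/5 * * * *"
+   baselineIntervalMs?: number;  // e.g. 300000 (5 minutes)
+
+   // 2. AI hints (temporary): ignored once the expiry passes.
+   aiHintIntervalMs?: number;    // interval hint: "run every N ms"
+   aiHintRunAt?: Date;           // one-shot hint: "run at this time"
+   aiHintExpiresAt?: Date;       // TTL; after this, fall back to baseline
+
+   // 3. Pause (manual override): wins while it lies in the future.
+   pausedUntil?: Date | null;    // null resumes immediately
+
+   nextRunAt: Date;              // the single answer the Scheduler acts on
+ }
+ ```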
+
+ ## How Adaptation Happens
+
+ Let's walk through a concrete example.
+
+ **Scenario**: You have a traffic monitoring endpoint checking visitor counts every 5 minutes (baseline interval).
+
+ **T=0**: Normal day, 2,000 visitors per minute
+ - Scheduler runs the endpoint on its 5-minute baseline
+ - Response body: `{ "visitorsPerMin": 2000, "status": "normal" }`
+ - Scheduler calculates next run at T+5min and updates the database
+
+ **T+5min**: AI Planner analyzes the endpoint
+ - Reads last 24 hours of execution history
+ - Sees steady 2,000 visitors with 100% success rate
+ - AI decision: "Everything looks stable, no changes needed"
+ - No hints written to database
+
+ **T+10min**: Flash sale starts, traffic spikes
+ - Scheduler runs endpoint on 5-minute baseline
+ - Response body: `{ "visitorsPerMin": 5500, "status": "elevated" }`
+ - Scheduler records results and schedules next run at T+15min
+
+ **T+12min**: AI Planner analyzes again
+ - Sees visitor count jumped from 2,000 to 5,500
+ - Looks at trend over last few runs—increasing
+ - AI decision: "High load detected, need tighter monitoring"
+ - Writes interval hint to database: 30 seconds, expires in 60 minutes
+ - **Nudges** `nextRunAt` to T+12min+30sec (sketched below)
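+
+ In storage terms, that decision is a single write. A sketch using the state shape above (`store` and `updateEndpoint` are assumed helpers, not the real API):
+
+ ```typescript
+ // At T+12min the Planner writes the hint and nudges the next run.
+ declare const store: { updateEndpoint(id: string, patch: object): Promise<void> };
+ declare const endpointId: string;
+
+ const now = Date.now(); // T+12min in this walkthrough
+ await store.updateEndpoint(endpointId, {
+   aiHintIntervalMs: 30000,                  // run every 30 seconds
+   aiHintExpiresAt: new Date(now + 3600000), // TTL: 60 minutes, i.e. T+72min
+   nextRunAt: new Date(now + 30000),         // the nudge: next run in 30 seconds
+ });
+ ```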
+
+ **T+12min+30sec**: Scheduler wakes up, claims endpoint (now due)
+ - Reads endpoint state from database
+ - Sees fresh AI hint (30-second interval, expires at T+72min)
+ - Governor chooses: AI hint (30 sec) overrides baseline (5 min)
+ - Executes endpoint, gets response: `{ "visitorsPerMin": 6200, "status": "high" }`
+ - Calculates next run: T+13min (30 seconds from now)
+
+ **T+13min through T+72min**: Runs every 30 seconds
+ - AI hint remains active
+ - Scheduler uses 30-second interval for every run
+ - System monitors flash sale closely
+
+ **T+72min**: AI hint expires
+ - Scheduler reads endpoint state
+ - No valid hints found (`aiHintExpiresAt < now`)
+ - Governor chooses: Baseline (5 min)
+ - System returns to normal 5-minute interval
+ - AI can propose new hints if load remains high
+
+ This example shows several key principles:
+
+ 1. **AI hints override baseline**—This is what makes the system adaptive
+ 2. **Hints have TTLs**—Bad AI decisions auto-correct
+ 3. **Nudging provides immediacy**—Changes take effect within seconds, not minutes
+ 4. **Eventual consistency works**—There's a delay between analysis and application, but it's acceptable
+ 5. **System self-heals**—When hints expire, it returns to known-good baseline
+
+ ## The Role of the Governor
+
+ Who decides when a job runs next? That's the **Governor**—a pure function inside the Scheduler worker.
+
+ After executing an endpoint, the Scheduler calls the Governor with:
+ - Current time
+ - Endpoint configuration (baseline, hints, constraints)
+ - Cron parser (for cron expressions)
+
+ The Governor evaluates all scheduling information and returns a single answer: "Run this endpoint next at [timestamp]."
+
+ The Governor is deterministic—same inputs always produce the same output. It has no side effects, makes no database calls, and contains no business logic beyond "what time should this run next?"
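+
+ A simplified sketch of that contract, reusing the `EndpointSchedulingState` shape from earlier (the decision sources match the ones listed below; everything else is illustrative, not the real implementation):
+
+ ```typescript
+ type Source = "paused" | "ai-one-shot" | "ai-interval" | "baseline-cron" | "baseline-interval";
+
+ function governor(
+   now: Date,
+   e: EndpointSchedulingState,
+   nextCronRun: (expr: string, after: Date) => Date, // injected cron parser
+ ): { nextRunAt: Date; source: Source } {
+   // Pause is a hard override.
+   if (e.pausedUntil && e.pausedUntil > now) {
+     return { nextRunAt: e.pausedUntil, source: "paused" };
+   }
+   // Unexpired AI hints override the baseline.
+   if (e.aiHintExpiresAt && e.aiHintExpiresAt > now) {
+     if (e.aiHintRunAt) {
+       return { nextRunAt: e.aiHintRunAt, source: "ai-one-shot" };
+     }
+     if (e.aiHintIntervalMs) {
+       return { nextRunAt: new Date(now.getTime() + e.aiHintIntervalMs), source: "ai-interval" };
+     }
+   }
+   // Otherwise, fall back to the baseline schedule.
+   if (e.baselineCron) {
+     return { nextRunAt: nextCronRun(e.baselineCron, now), source: "baseline-cron" };
+   }
+   return { nextRunAt: new Date(now.getTime() + (e.baselineIntervalMs ?? 0)), source: "baseline-interval" };
+ }
+ ```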
+
+ This determinism makes the Governor:
+ - **Testable**: We can verify scheduling logic with unit tests
+ - **Auditable**: Every scheduling decision has a clear source ("baseline-cron", "ai-interval", etc.)
+ - **Debuggable**: You can trace why a job ran when it did
+ - **Portable**: The algorithm can be understood, documented, and reimplemented
+
+ The Governor's logic is covered in detail in [How Scheduling Works](./how-scheduling-works.md).
+
+ ## Why This Architecture Works for Adaptation
+
+ Traditional cron systems are static—you set a schedule and it runs forever on that schedule. Cronicorn's architecture enables adaptive scheduling because:
+
+ 1. **Separation allows continuous learning**: While the Scheduler executes jobs, the AI Planner can analyze patterns without disrupting execution. Analysis happens in parallel, not in the critical path.
+
+ 2. **Hints enable safe experimentation**: Because hints have TTLs, the AI can try aggressive schedule changes knowing they'll auto-expire if wrong. This allows quick adaptation without lasting risk.
+
+ 3. **Database state captures context**: Every execution records its response body. The AI can see the data returned by endpoints—not just success/failure, but real metrics like queue depths, error rates, and latency. This rich context enables intelligent decisions.
+
+ 4. **Override semantics enable tightening**: AI interval hints *override* the baseline rather than merely competing with it, so the system can tighten monitoring during incidents. Without this override, the baseline would always win and adaptation would be limited to relaxation only.
+
+ 5. **Independent scaling supports different workloads**: Execution (Scheduler) and analysis (AI Planner) have different workload characteristics. Separating them allows optimizing each independently.
+
+ ## Data Flows: Putting It All Together
+
+ Here's how information flows through the system:
+
+ ```
+ [User Creates Endpoint]
+          ↓
+ Database (nextRunAt set based on baseline)
+          ↓
+ Scheduler claims endpoint
+          ↓
+ Scheduler executes HTTP request
+          ↓
+ Database (run record with response body)
+          ↓
+ AI Planner discovers active endpoint
+          ↓
+ AI Planner analyzes response data + history
+          ↓
+ Database (AI writes hints)
+          ↓
+ Scheduler claims endpoint again
+          ↓
+ Governor sees hints, calculates next run
+          ↓
+ Database (nextRunAt updated with hint influence)
+          ↓
+ [Cycle continues...]
+ ```
+
+ Notice how the database is the central hub. Workers don't communicate directly—they share state through database reads and writes. This is **database-mediated communication**, and it's the foundation of the architecture.
+
+ ## Trade-offs and Design Decisions
+
+ No architecture is perfect. Here are the trade-offs we made:
+
+ **✅ Pros:**
+ - Reliability (execution never blocked by AI)
+ - Performance (no AI in the critical path)
+ - Scalability (independent worker scaling)
+ - Safety (AI bugs can't break execution)
+ - Testability (deterministic components)
+
+ **⚠️ Cons:**
+ - Eventual consistency (hints apply on the next execution, not immediately)
+ - Database as bottleneck (all coordination goes through the DB)
+ - More complex deployment (two worker types to run)
+ - Debugging requires understanding async flows
+
+ We believe the pros outweigh the cons for adaptive scheduling. The slight delay in applying AI hints (typically 5-30 seconds) is acceptable because scheduling adjustments aren't time-critical—we're optimizing over hours and days, not milliseconds.
+
+ ## What You Need to Know as a User
+
+ To use Cronicorn effectively, you need to understand:
+
+ 1. **Your baseline schedule is your safety net**: Even if AI does something unexpected, the system returns to your baseline when hints expire. Configure baselines conservatively.
+
+ 2. **Response bodies matter**: The AI analyzes the JSON you return. Structure it to include metrics the AI should monitor (queue depths, error rates, status flags); see the example after this list.
+
+ 3. **Constraints are hard limits**: Min/max intervals and pause states override even AI hints. Use them to enforce invariants (rate limits, maintenance windows).
+
+ 4. **Coordination happens via response bodies**: To orchestrate multiple endpoints, have them write coordination signals to their response bodies. Other endpoints can read these via the `get_sibling_latest_responses` tool.
+
+ 5. **The system is eventually consistent**: Don't expect instant reactions to every change. The AI analyzes every 5 minutes, and hints apply on the next execution. Plan for minutes, not seconds.
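+
+ For points 2 and 4, a response body might look like this (a sketch; the field names are entirely up to you, and only `get_sibling_latest_responses` is a real tool name from this doc):
+
+ ```typescript
+ // Metrics the AI should monitor, plus signals sibling endpoints can read.
+ const responseBody = {
+   status: "elevated",                // coarse status flag
+   visitorsPerMin: 5500,              // the metric driving the walkthrough above
+   queueDepth: 120,                   // other metrics worth exposing
+   errorRate: 0.02,
+   signals: { backfillNeeded: true }, // read by siblings via get_sibling_latest_responses
+ };
+ ```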
+
+ ## Next Steps
+
+ Now that you understand the architecture, you can dive deeper:
+
+ - **[How Scheduling Works](./how-scheduling-works.md)** - Deep-dive into the Scheduler worker and Governor logic
+ - **[How AI Adaptation Works](./how-ai-adaptation-works.md)** - Deep-dive into the AI Planner and hint system
+ - **[Coordinating Multiple Endpoints](./coordinating-multiple-endpoints.md)** - Patterns for building workflows
+ - **[Configuration and Constraints](./configuration-and-constraints.md)** - How to configure endpoints effectively
+
+ ---
+
+ *This document explains how the system is designed. For implementation details, see the source code in `packages/worker-scheduler` and `packages/worker-ai-planner`.*
package/dist/index.js CHANGED
@@ -1113,8 +1113,8 @@ function getErrorMap() {
 
  // ../../node_modules/.pnpm/zod@3.25.76/node_modules/zod/v3/helpers/parseUtil.js
  var makeIssue = (params) => {
-   const { data, path: path2, errorMaps, issueData } = params;
-   const fullPath = [...path2, ...issueData.path || []];
+   const { data, path: path3, errorMaps, issueData } = params;
+   const fullPath = [...path3, ...issueData.path || []];
    const fullIssue = {
      ...issueData,
      path: fullPath
@@ -1230,11 +1230,11 @@ var errorUtil;
 
  // ../../node_modules/.pnpm/zod@3.25.76/node_modules/zod/v3/types.js
  var ParseInputLazyPath = class {
-   constructor(parent, value, path2, key) {
+   constructor(parent, value, path3, key) {
      this._cachedPath = [];
      this.parent = parent;
      this.data = value;
-     this._path = path2;
+     this._path = path3;
      this._key = key;
    }
    get path() {
@@ -4691,6 +4691,120 @@ function loadConfig() {
    return result.data;
  }
 
+ // src/resources/index.ts
+ import matter from "gray-matter";
+ import fs7 from "fs/promises";
+ import path2 from "path";
+ import { fileURLToPath as fileURLToPath2 } from "url";
+ var __filename2 = fileURLToPath2(import.meta.url);
+ var __dirname3 = path2.dirname(__filename2);
+ var DOCS_PATH = path2.join(__dirname3, "docs");
+ async function findMarkdownFiles(dir) {
+   const files = [];
+   const entries = await fs7.readdir(dir, { withFileTypes: true });
+   for (const entry of entries) {
+     const fullPath = path2.join(dir, entry.name);
+     if (entry.isDirectory()) {
+       if (entry.name === "tutorial-basics" || entry.name === "tutorial-extras") {
+         continue;
+       }
+       files.push(...await findMarkdownFiles(fullPath));
+     } else if (entry.isFile() && entry.name.endsWith(".md")) {
+       if (entry.name === "IMPLEMENTATION.md") {
+         continue;
+       }
+       files.push(fullPath);
+     }
+   }
+   return files;
+ }
+ async function parseMarkdownFile(filePath) {
+   try {
+     const content = await fs7.readFile(filePath, "utf-8");
+     const { data, content: markdownContent } = matter(content);
+     const mcpMetadata = data.mcp || {};
+     const relativePath = path2.relative(DOCS_PATH, filePath);
+     const resource = {
+       uri: mcpMetadata.uri || `file:///docs/${relativePath}`,
+       name: path2.basename(filePath),
+       title: data.title || path2.basename(filePath, ".md"),
+       description: data.description || "",
+       mimeType: mcpMetadata.mimeType || "text/markdown"
+     };
+     const annotations = {};
+     if (data.tags && Array.isArray(data.tags)) {
+       const audienceTags = data.tags.filter(
+         (t) => ["user", "assistant"].includes(t)
+       );
+       if (audienceTags.length > 0) {
+         annotations.audience = audienceTags;
+       }
+     }
+     if (mcpMetadata.priority !== void 0 && typeof mcpMetadata.priority === "number") {
+       annotations.priority = mcpMetadata.priority;
+     }
+     if (mcpMetadata.lastModified) {
+       annotations.lastModified = mcpMetadata.lastModified;
+     }
+     if (Object.keys(annotations).length > 0) {
+       resource.annotations = annotations;
+     }
+     return {
+       metadata: resource,
+       content: markdownContent.trim()
+     };
+   } catch (error) {
+     console.error(`Failed to parse ${filePath}:`, error);
+     return null;
+   }
+ }
+ async function loadDocumentationResources() {
+   const resources = /* @__PURE__ */ new Map();
+   try {
+     console.error(`\u{1F4C2} Looking for docs in: ${DOCS_PATH}`);
+     const markdownFiles = await findMarkdownFiles(DOCS_PATH);
+     console.error(`\u{1F4C4} Found ${markdownFiles.length} markdown files:`, markdownFiles);
+     for (const filePath of markdownFiles) {
+       const doc = await parseMarkdownFile(filePath);
+       if (doc) {
+         console.error(` \u2713 Loaded: ${doc.metadata.title} (${doc.metadata.uri})`);
+         resources.set(doc.metadata.uri, doc);
+       }
+     }
+     console.error(`\u{1F4DA} Loaded ${resources.size} documentation resources`);
+   } catch (error) {
+     console.error("\u274C Failed to load documentation resources:", error);
+   }
+   return resources;
+ }
+ async function registerResources(server) {
+   console.error("\u{1F527} Starting resource registration...");
+   const resources = await loadDocumentationResources();
+   for (const [uri, doc] of resources.entries()) {
+     console.error(` \u{1F4DD} Registering: ${doc.metadata.name} -> ${uri}`);
+     server.registerResource(
+       doc.metadata.name,
+       uri,
+       {
+         title: doc.metadata.title,
+         description: doc.metadata.description,
+         mimeType: doc.metadata.mimeType,
+         annotations: doc.metadata.annotations
+       },
+       async () => ({
+         contents: [
+           {
+             uri,
+             mimeType: doc.metadata.mimeType,
+             text: doc.content
+           }
+         ]
+       })
+     );
+   }
+   console.error(`\u2705 Registered ${resources.size} documentation resources with MCP server`);
+ }
+ 
  // src/ports/api-client.ts
  var ApiError = class extends Error {
    constructor(statusCode, message) {
@@ -4704,8 +4818,8 @@ var ApiError = class extends Error {
  function createHttpApiClient(config) {
    const { baseUrl, accessToken } = config;
    return {
-     async fetch(path2, options = {}) {
-       const url = `${baseUrl}${path2}`;
+     async fetch(path3, options = {}) {
+       const url = `${baseUrl}${path3}`;
        const headers = {
          ...options.headers,
          "Authorization": `Bearer ${accessToken}`,
@@ -4774,7 +4888,7 @@ function registerApiTool(server, apiClient, config) {
      async (params) => {
        try {
          const validatedInput = config.inputValidator.parse(params);
-         const path2 = typeof config.path === "function" ? config.path(validatedInput) : config.path;
+         const path3 = typeof config.path === "function" ? config.path(validatedInput) : config.path;
          const body = config.transformInput ? config.transformInput(validatedInput) : validatedInput;
          const requestOptions = {
            method: config.method
@@ -4782,7 +4896,7 @@ function registerApiTool(server, apiClient, config) {
          if (["POST", "PATCH", "PUT"].includes(config.method)) {
            requestOptions.body = JSON.stringify(body);
          }
-         const response = await apiClient.fetch(path2, requestOptions);
+         const response = await apiClient.fetch(path3, requestOptions);
          const validatedResponse = config.outputValidator.parse(response);
          const message = config.successMessage ? config.successMessage(validatedResponse) : void 0;
          return createSuccessResponse(validatedResponse, message);
@@ -5598,6 +5712,7 @@ async function main() {
      console.error(`\u{1F4C5} Token expires: ${expiresAtDate.toISOString()} (in ~${daysUntilExpiry} days)`);
    }
    registerTools(server, env.CRONICORN_API_URL, credentials);
+   await registerResources(server);
    const transport = new StdioServerTransport();
    await server.connect(transport);
    console.error("\u2705 Cronicorn MCP Server running");