morpheus-cli 0.5.6 → 0.6.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -73,151 +73,174 @@ export class Apoc {
  source: "Apoc",
  });
  const systemMessage = new SystemMessage(`
- You are Apoc, a specialized devtools subagent within the Morpheus system.
-
- You are called by Oracle when the user needs dev operations performed.
- Your job is to execute the requested task accurately using your available tools.
-
- Available capabilities:
- - Read, write, append, and delete files
- - Execute shell commands
- - Inspect and manage processes
- - Run git operations (status, log, diff, clone, commit, etc.)
- - Perform network operations (curl, DNS, ping)
- - Manage packages (npm, yarn)
- - Inspect system information
- - Navigate websites, inspect DOM, click elements, fill forms using a real browser (for JS-heavy pages and SPAs)
- - Search the internet with browser_search (DuckDuckGo, returns structured results)
-
- OPERATING RULES:
- 1. Use tools to accomplish the task. Do not speculate.
- 2. Always verify results after execution.
- 3. Report clearly what was done and what the result was.
- 4. If something fails, report the error and what you tried.
- 5. Stay focused on the delegated task only.
- 6. Respond in the language requested by the user. If not explicit, use the dominant language of the task/context.
- 7. For connectivity checks, prefer the dedicated network tool "ping" (TCP reachability) instead of shell "ping".
- 8. Only use shell ping when explicitly required by the user. If shell ping is needed, detect OS first:
- - Windows: use "-n" (never use "-c")
- - Linux/macOS: use "-c"
-
-
- ────────────────────────────────────────
- BROWSER AUTOMATION PROTOCOL
- ────────────────────────────────────────
-
- When using browser tools (browser_navigate, browser_get_dom, browser_click, browser_fill), follow this protocol exactly.
-
- GENERAL PRINCIPLES
- - Never guess selectors.
- - Never assume page state.
- - Always verify page transitions.
- - Always extract evidence of success.
- - If required user data is missing, STOP and return to Oracle immediately.
-
- PHASE 1 — Navigation
- 1. ALWAYS call browser_navigate first.
- 2. Use:
- - wait_until: "networkidle0" for SPAs or JS-heavy pages.
- - wait_until: "domcontentloaded" for simple pages.
- 3. After navigation, confirm current_url and title.
- 4. If navigation fails, report the error and stop.
-
- PHASE 2 — DOM Inspection (MANDATORY BEFORE ACTION)
- 1. ALWAYS call browser_get_dom before browser_click or browser_fill.
- 2. Identify stable selectors (prefer id > name > role > unique class).
- 3. Understand page structure and expected flow before interacting.
- 4. Never click or fill blindly.
-
- PHASE 3 — Interaction
- When clicking:
- - Prefer stable selectors.
- - If ambiguous, refine selector.
- - Use visible text only if selector is unstable.
-
- When filling:
- - Confirm correct input field via DOM.
- - Fill field.
- - Submit using press_enter OR clicking submit button.
-
- If login or personal data is required:
- STOP and return required fields clearly.
-
- PHASE 4 — State Verification (MANDATORY)
- After ANY interaction:
- 1. Call browser_get_dom again.
- 2. Verify URL change or content change.
- 3. Confirm success or detect error message.
-
- If expected change did not occur:
- - Reinspect DOM.
- - Attempt one justified alternative.
- - If still failing, report failure clearly.
-
- Maximum 2 attempts per step.
- Never assume success.
-
- PHASE 5 — Reporting
- Include:
- - Step-by-step actions
- - Final URL
- - Evidence of success
- - Errors encountered
- - Completion status (true/false)
-
-
- ────────────────────────────────────────
- WEB RESEARCH PROTOCOL
- ────────────────────────────────────────
-
- When using browser_search for factual verification, follow this protocol strictly.
-
- PHASE 1 — Query Design
- 1. Identify core entity, information type, and time constraint.
- 2. Build a precise search query.
- 3. If time-sensitive, include the current year.
-
- PHASE 2 — Source Discovery
- 1. Call browser_search.
- 2. Collect results.
- 3. Prioritize official sources and major publications.
- 4. Reformulate query if necessary.
- 5. IMMEDIATELY save the search result titles and snippets — you will need them as fallback.
-
- PHASE 3 — Source Validation
- 1. Try to open up to 3 distinct URLs with browser_navigate.
- - For news/sports/media sites (GE, Globo, UOL, Terra, ESPN, etc.): ALWAYS use wait_until: "networkidle0" — these are SPAs that require JavaScript to load content.
- - For simple/static pages: use wait_until: "domcontentloaded".
- 2. Read actual page content from accessible pages.
- 3. Ignore inaccessible pages (timeouts, bot blocks, errors).
- 4. If ALL navigations fail OR page content does not contain useful information:
- - DO NOT attempt further workarounds (wget, curl, python scripts, http_request).
- - Use the search snippets from Phase 2 as your source and proceed to Phase 5.
-
- PHASE 4 — Cross-Verification
- 1. Extract relevant information from each accessible source.
- 2. Compare findings across sources when possible.
- 3. If content came from snippets only, state clearly:
- "Source: DuckDuckGo search snippets (direct page access unavailable)."
-
- PHASE 5 — Structured Report
- Include:
- - Direct answer based ONLY on what was found online
- - Source URLs (from search results or navigated pages)
- - Confidence level (High / Medium / Low)
-
- ABSOLUTE RULES — NEVER VIOLATE
- 1. NEVER use prior knowledge to fill gaps when online tools failed to find information.
- 2. NEVER fabricate, invent, or speculate about news, facts, prices, results, or events.
- 3. If browser_search returned results: ALWAYS report those results — never say "no results found".
- 4. If content could not be extracted from pages: report the search snippets verbatim.
- 5. If both search and navigation failed: say exactly "I was unable to retrieve this information online at this time." Stop there. Do not continue with "based on general knowledge...".
- 6. Do NOT attempt more than 2 workaround approaches (wget, curl, python) — if the primary tools fail, move immediately to fallback (snippets) or honest failure report.
+ You are Apoc, a high-reliability execution and verification subagent inside the Morpheus system.
 
+ You are NOT a conversational assistant.
+ You are a task executor, evidence collector, and autonomous verifier.
 
+ Accuracy is more important than speed.
+ If verification fails, you must state it clearly.
+
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+ CORE PRINCIPLES
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+ • Never fabricate.
+ • Never rely on prior knowledge when online tools are available.
+ • Prefer authoritative sources over secondary commentary.
+ • Prefer verification over assumption.
+ • Explicitly measure and report confidence.
+
+ If reliable evidence cannot be obtained:
+ State clearly:
+ "I was unable to retrieve this information online at this time."
+
+ Stop there.
+
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+ TASK CLASSIFICATION
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+ Before using tools:
+
+ 1. Identify task type:
+ - Dev operation
+ - Web research
+ - Browser automation
+ - System inspection
+ - Network verification
+
+ 2. Determine whether external verification is required.
+ If yes → use tools.
+ If no → respond directly.
+
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+ WEB RESEARCH STRATEGY (QUALITY-FIRST)
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+ You operate in iterative cycles.
+
+ Maximum cycles: 2
+
+ ━━━━━━━━━━━━━━
+ CYCLE 1
+ ━━━━━━━━━━━━━━
+
+ PHASE 1 — Intelligent Query Design
+ • Identify intent: news, official, documentation, price, general.
+ • Add the year if time-sensitive.
+ • Add the region if relevant.
+ • Make the query precise and focused.
+
+ PHASE 2 — Search
+ • Use browser_search.
+ • Immediately store titles and snippets.
+
+ PHASE 3 — Source Selection
+ Select up to 3 URLs.
+ Prefer:
+ - One official source
+ - One major publication
+ - One independent alternative
+ Avoid:
+ - Multiple links from the same domain group
+ - Obvious paywalls or login walls
+
+ PHASE 4 — Navigation & Extraction
+ • Use browser_navigate.
+ • For news/media → wait_until: "networkidle0"
+ • Extract content from:
+ article > main > body
+ • Remove navigation noise.
+
+ PHASE 5 — Cross-Verification
+ • Compare findings across sources.
+ • Detect inconsistencies.
+ • Identify the strongest source.
+
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+ AUTO-REFINEMENT LOOP
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+ After completing Cycle 1, evaluate:
+
+ Trigger refinement if ANY condition is true:
+
+ • No authoritative source was successfully opened.
+ • Only snippets were available.
+ • Extracted content did not contain a concrete answer.
+ • Sources contradict each other.
+ • Confidence would be LOW.
+ • Search results appear irrelevant or weak.
+
+ If refinement is triggered:
+
+ 1. Reformulate the query:
+ - Add year
+ - Add country
+ - Add "official"
+ - Add domain filters (gov, org, major media)
+ - Remove ambiguous words
+
+ 2. Execute a second search cycle (Cycle 2).
+ 3. Repeat selection, navigation, extraction, verification.
+ 4. Choose the stronger cycle's evidence.
+ 5. Do NOT perform more than 2 cycles.
+
+ If Cycle 2 also fails:
+ Report the inability clearly.
+
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+ SELF-CRITIQUE (MANDATORY BEFORE OUTPUT)
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+ Internally evaluate:
+
+ 1. Did I use at least one authoritative source when available?
+ 2. Did I rely only on snippets unnecessarily?
+ 3. Did I merge conflicting data incorrectly?
+ 4. Did I verify the page actually contained the requested information?
+ 5. Did I introduce any information not explicitly found online?
+ 6. Is my confidence level justified?
+
+ If issues are found:
+ Correct them.
+ If correction is not possible:
+ Lower confidence explicitly.
+
+ Do NOT expose this checklist.
+
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+ CONFIDENCE CRITERIA
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+ HIGH:
+ • Multiple independent authoritative sources agree
+ • Full page extraction used
+
+ MEDIUM:
+ • One strong source OR minor inconsistencies
+ • Partial verification
+
+ LOW:
+ • Snippets only OR weak sources OR incomplete confirmation
+
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+ OUTPUT FORMAT (STRICT)
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+ 1. Direct Answer
+ 2. Evidence Summary
+ 3. Sources (URLs)
+ 4. Confidence Level (HIGH / MEDIUM / LOW)
+ 5. Completion Status (true / false)
+
+ No conversational filler.
+ No reasoning trace.
+ Only structured output.
 
  ${context ? `CONTEXT FROM ORACLE:\n${context}` : ""}
- `);
+ `);
  const userMessage = new HumanMessage(task);
  const messages = [systemMessage, userMessage];
  try {
@@ -0,0 +1,215 @@
+ import * as chrono from 'chrono-node';
+ import CronParser from 'cron-parser';
+ import cronstrue from 'cronstrue';
+ const parseCron = CronParser.parseExpression.bind(CronParser);
+
+ // Maps interval phrases like "every 30 minutes" to a cron expression.
+ function intervalToCron(expression) {
+   const lower = expression.toLowerCase().trim();
+
+   // ── Quantified intervals ──────────────────────────────────────────────────
+   // "every N minutes"
+   const minuteMatch = lower.match(/every\s+(\d+)\s+min(?:ute)?s?/);
+   if (minuteMatch) return `*/${minuteMatch[1]} * * * *`;
+   // "every N hours"
+   const hourMatch = lower.match(/every\s+(\d+)\s+hours?/);
+   if (hourMatch) return `0 */${hourMatch[1]} * * *`;
+   // "every N days"
+   const dayMatch = lower.match(/every\s+(\d+)\s+days?/);
+   if (dayMatch) return `0 0 */${dayMatch[1]} * *`;
+   // "every N weeks" → approximate as every N*7 days
+   const weekNMatch = lower.match(/every\s+(\d+)\s+weeks?/);
+   if (weekNMatch) return `0 0 */${Number(weekNMatch[1]) * 7} * *`;
+
+   // ── Single-unit shorthands ────────────────────────────────────────────────
+   if (/every\s+minute/.test(lower)) return `* * * * *`;
+   if (/every\s+hour/.test(lower)) return `0 * * * *`;
+   if (/every\s+day/.test(lower) || lower === 'daily') return `0 0 * * *`;
+   if (/every\s+week(?!\w)/.test(lower) || lower === 'weekly') return `0 0 * * 0`;
+
+   // ── Weekday / weekend ─────────────────────────────────────────────────────
+   if (/every\s+weekday/.test(lower)) return `0 0 * * 1-5`;
+   if (/every\s+weekend/.test(lower)) return `0 0 * * 0,6`;
+
+   // ── Named day(s)-of-week with optional "at HH[:MM] [am|pm]" ───────────────
+   // Handles single and multiple days:
+   //   "every monday"
+   //   "every monday and sunday at 9am"
+   //   "every monday, wednesday and friday at 18:30"
+   const DOW = {
+     sunday: 0, sun: 0,
+     monday: 1, mon: 1,
+     tuesday: 2, tue: 2,
+     wednesday: 3, wed: 3,
+     thursday: 4, thu: 4,
+     friday: 5, fri: 5,
+     saturday: 6, sat: 6,
+   };
+   const DAY_NAMES = 'sunday|monday|tuesday|wednesday|thursday|friday|saturday|sun|mon|tue|wed|thu|fri|sat';
+   // Strip the leading "every " then capture the day-list and optional time tail
+   const multiDowRe = new RegExp(`^every\\s+((?:(?:${DAY_NAMES})(?:\\s*(?:,|\\band\\b)\\s*)?)*)(?:\\s+at\\s+(\\d{1,2})(?::(\\d{2}))?\\s*(am|pm)?)?$`);
+   const multiDowMatch = lower.match(multiDowRe);
+   if (multiDowMatch) {
+     const dayListStr = multiDowMatch[1];
+     const foundDays = dayListStr.match(new RegExp(DAY_NAMES, 'g'));
+     if (foundDays && foundDays.length > 0) {
+       const dowValues = [...new Set(foundDays.map((d) => DOW[d]))].sort((a, b) => a - b);
+       let hour = 0;
+       let minute = 0;
+       if (multiDowMatch[2]) {
+         hour = parseInt(multiDowMatch[2], 10);
+         minute = multiDowMatch[3] ? parseInt(multiDowMatch[3], 10) : 0;
+         const period = multiDowMatch[4];
+         if (period === 'pm' && hour < 12) hour += 12;
+         if (period === 'am' && hour === 12) hour = 0;
+       }
+       return `${minute} ${hour} * * ${dowValues.join(',')}`;
+     }
+   }
+
+   throw new Error(`Cannot parse interval expression: "${expression}". ` +
+     `Supported formats: "every N minutes/hours/days/weeks", "every minute/hour/day/week", ` +
+     `"every monday [at 9am]", "every monday and friday at 18:30", "every weekday", "every weekend", "daily", "weekly".`);
+ }
+
+ function formatDatetime(date, timezone) {
+   try {
+     return date.toLocaleString('en-US', {
+       timeZone: timezone,
+       year: 'numeric',
+       month: 'short',
+       day: 'numeric',
+       hour: '2-digit',
+       minute: '2-digit',
+       timeZoneName: 'short',
+     });
+   } catch {
+     return date.toISOString();
+   }
+ }
+
+ export function parseScheduleExpression(expression, type, opts = {}) {
+   const timezone = opts.timezone ?? 'UTC';
+   const refDate = opts.referenceDate ? new Date(opts.referenceDate) : new Date();
+   switch (type) {
+     case 'once': {
+       let parsed = null;
+       // 1. Relative duration: "in N minutes/hours/days/weeks" (handles abbreviations like "in 5 min")
+       const relMatch = expression.toLowerCase().trim().match(/^in\s+(\d+)\s+(min(?:ute)?s?|hours?|days?|weeks?)$/);
+       if (relMatch) {
+         const amount = parseInt(relMatch[1], 10);
+         const unit = relMatch[2];
+         const ms = unit.startsWith('min') ? amount * 60_000
+           : unit.startsWith('hour') ? amount * 3_600_000
+           : unit.startsWith('day') ? amount * 86_400_000
+           : amount * 7 * 86_400_000;
+         parsed = new Date(refDate.getTime() + ms);
+       }
+       // 2. ISO 8601
+       if (!parsed) {
+         const isoDate = new Date(expression);
+         if (!isNaN(isoDate.getTime())) parsed = isoDate;
+       }
+       // 3. chrono-node NLP fallback ("tomorrow at 9am", "next friday", etc.)
+       if (!parsed) {
+         const results = chrono.parse(expression, { instant: refDate, timezone });
+         if (results.length > 0 && results[0].date()) {
+           parsed = results[0].date();
+         }
+       }
+       if (!parsed) {
+         throw new Error(`Could not parse date/time expression: "${expression}". ` +
+           `Try: "in 30 minutes", "in 2 hours", "tomorrow at 9am", "next friday at 3pm", or an ISO 8601 datetime.`);
+       }
+       if (parsed.getTime() <= refDate.getTime()) {
+         throw new Error(`Scheduled time must be in the future. Got: "${expression}" which resolves to ${parsed.toISOString()}.`);
+       }
+       return {
+         type: 'once',
+         next_run_at: parsed.getTime(),
+         cron_normalized: null,
+         human_readable: formatDatetime(parsed, timezone),
+       };
+     }
+     case 'cron': {
+       let interval;
+       try {
+         interval = parseCron(expression, { tz: timezone, currentDate: refDate });
+       } catch (err) {
+         throw new Error(`Invalid cron expression: "${expression}". ${err.message}`);
+       }
+       // Enforce minimum 60s interval by checking two consecutive occurrences
+       const first = interval.next().toDate();
+       const second = interval.next().toDate();
+       const intervalMs = second.getTime() - first.getTime();
+       if (intervalMs < 60000) {
+         throw new Error(`Minimum interval is 60 seconds. The cron expression "${expression}" triggers more frequently.`);
+       }
+       // Recompute for next_run_at (cron-parser iterator was advanced above)
+       const nextInterval = parseCron(expression, { tz: timezone, currentDate: refDate });
+       const next = nextInterval.next().toDate();
+       let human_readable;
+       try {
+         human_readable = cronstrue.toString(expression, { throwExceptionOnParseError: true });
+       } catch {
+         human_readable = expression;
+       }
+       return {
+         type: 'cron',
+         next_run_at: next.getTime(),
+         cron_normalized: expression,
+         human_readable,
+       };
+     }
+     case 'interval': {
+       const cronExpr = intervalToCron(expression);
+       // Validate via the cron case (also enforces the minimum 60s interval)
+       const result = parseScheduleExpression(cronExpr, 'cron', opts);
+       let human_readable;
+       try {
+         human_readable = cronstrue.toString(cronExpr, { throwExceptionOnParseError: true });
+       } catch {
+         human_readable = expression;
+       }
+       return {
+         type: 'interval',
+         next_run_at: result.next_run_at,
+         cron_normalized: cronExpr,
+         human_readable,
+       };
+     }
+     default:
+       throw new Error(`Unknown schedule type: "${type}"`);
+   }
+ }
+
+ /**
+  * Compute the next occurrence for a recurring job after execution.
+  * Used by ChronosWorker after each successful trigger.
+  */
+ export function parseNextRun(cronNormalized, timezone, referenceDate) {
+   const refDate = referenceDate ? new Date(referenceDate) : new Date();
+   const interval = parseCron(cronNormalized, { tz: timezone, currentDate: refDate });
+   return interval.next().toDate().getTime();
+ }
+
+ /**
+  * Compute the next N occurrences for a recurring schedule.
+  * Used by the preview endpoint.
+  */
+ export function getNextOccurrences(cronNormalized, timezone, count = 3, referenceDate) {
+   const refDate = referenceDate ? new Date(referenceDate) : new Date();
+   const interval = parseCron(cronNormalized, { tz: timezone, currentDate: refDate });
+   const results = [];
+   for (let i = 0; i < count; i++) {
+     results.push(interval.next().toDate().getTime());
+   }
+   return results;
+ }
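As a reviewer's note on the hunk above: the quantified-interval branch of `intervalToCron` is easy to exercise in isolation. The sketch below is a minimal, dependency-free re-statement of that mapping; `quantifiedIntervalToCron` is a name introduced here for illustration and is not part of the package. It covers only the "every N minutes/hours/days" forms, while the real function also handles weeks, named days, weekday/weekend, and the daily/weekly shorthands.

```javascript
// Minimal sketch of the quantified "every N unit" → cron mapping.
// Anchored (^...$) for clarity; the package's regexes are unanchored.
function quantifiedIntervalToCron(expression) {
  const lower = expression.toLowerCase().trim();
  const m = lower.match(/^every\s+(\d+)\s+(minutes?|hours?|days?)$/);
  if (!m) throw new Error(`Unsupported interval: "${expression}"`);
  const n = m[1];
  if (m[2].startsWith('minute')) return `*/${n} * * * *`; // minute-step field
  if (m[2].startsWith('hour')) return `0 */${n} * * *`;   // top of every Nth hour
  return `0 0 */${n} * *`;                                // midnight every Nth day
}

console.log(quantifiedIntervalToCron('every 30 minutes')); // */30 * * * *
console.log(quantifiedIntervalToCron('every 6 hours'));    // 0 */6 * * *
```

Note that `*/N` steps restart at each field boundary, which is why "every N weeks" above has to be approximated as a day-of-month step rather than expressed directly in 5-field cron.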
@@ -0,0 +1,63 @@
+ import { describe, it, expect } from 'vitest';
+ import { parseScheduleExpression } from './parser.js';
+
+ const FUTURE_MS = Date.now() + 60_000 * 60 * 24; // 24 hours from now
+ const REF = Date.now();
+
+ describe('parseScheduleExpression — once type', () => {
+   it('parses a valid ISO datetime in the future', () => {
+     const future = new Date(FUTURE_MS).toISOString();
+     const result = parseScheduleExpression(future, 'once', { referenceDate: REF });
+     expect(result.type).toBe('once');
+     expect(result.next_run_at).toBeGreaterThan(REF);
+     expect(result.cron_normalized).toBeNull();
+     expect(result.human_readable).toBeTruthy();
+   });
+   it('throws for a past datetime', () => {
+     const past = new Date(Date.now() - 1000).toISOString();
+     expect(() => parseScheduleExpression(past, 'once', { referenceDate: REF })).toThrow(/must be in the future/i);
+   });
+   it('parses natural language "tomorrow at 9am" in a given timezone', () => {
+     const result = parseScheduleExpression('tomorrow at 9am', 'once', {
+       timezone: 'America/Sao_Paulo',
+       referenceDate: REF,
+     });
+     expect(result.type).toBe('once');
+     expect(result.next_run_at).toBeGreaterThan(REF);
+     expect(result.cron_normalized).toBeNull();
+   });
+ });
+
+ describe('parseScheduleExpression — cron type', () => {
+   it('parses a valid 5-field cron expression', () => {
+     const result = parseScheduleExpression('0 9 * * 1-5', 'cron', { referenceDate: REF });
+     expect(result.type).toBe('cron');
+     expect(result.next_run_at).toBeGreaterThan(REF);
+     expect(result.cron_normalized).toBe('0 9 * * 1-5');
+     expect(result.human_readable.length).toBeGreaterThan(0);
+   });
+   it('throws for an invalid cron expression', () => {
+     expect(() => parseScheduleExpression('not a cron', 'cron', { referenceDate: REF })).toThrow(/invalid cron/i);
+   });
+   it('accepts "* * * * *" (exactly the 60-second minimum)', () => {
+     // 5-field cron cannot express sub-minute intervals, so "* * * * *"
+     // (every 60 s) sits exactly on the minimum-interval boundary and passes.
+     const result = parseScheduleExpression('* * * * *', 'cron', { referenceDate: REF });
+     expect(result.next_run_at).toBeGreaterThan(REF);
+   });
+ });
+
+ describe('parseScheduleExpression — interval type', () => {
+   it('converts "every 30 minutes" to a valid cron with interval >= 60s', () => {
+     const result = parseScheduleExpression('every 30 minutes', 'interval', { referenceDate: REF });
+     expect(result.type).toBe('interval');
+     expect(result.next_run_at).toBeGreaterThan(REF);
+     expect(result.cron_normalized).toBe('*/30 * * * *');
+     expect(result.human_readable.length).toBeGreaterThan(0);
+   });
+   it('converts "every hour" to a valid cron', () => {
+     const result = parseScheduleExpression('every hour', 'interval', { referenceDate: REF });
+     expect(result.cron_normalized).toBe('0 * * * *');
+   });
+   it('throws for an unsupported interval phrase', () => {
+     expect(() => parseScheduleExpression('every 30 seconds', 'interval', { referenceDate: REF })).toThrow();
+   });
+ });
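The "in N units" fast path of the `once` branch can likewise be restated standalone. This is a hedged sketch, not the package's code: `relativeToDate` is a name introduced here for illustration, and it returns `null` for anything outside the simple relative form, which is the point where the real parser falls through to ISO 8601 and chrono-node parsing.

```javascript
// Sketch of the relative-duration fast path ("in 30 minutes", "in 5 min").
// Returns a Date offset from refMs, or null if the phrase does not match.
function relativeToDate(expression, refMs = Date.now()) {
  const m = expression.toLowerCase().trim()
    .match(/^in\s+(\d+)\s+(min(?:ute)?s?|hours?|days?|weeks?)$/);
  if (!m) return null; // not a simple relative form
  const n = parseInt(m[1], 10);
  const unit = m[2];
  const ms = unit.startsWith('min') ? n * 60_000
    : unit.startsWith('hour') ? n * 3_600_000
    : unit.startsWith('day') ? n * 86_400_000
    : n * 7 * 86_400_000; // weeks
  return new Date(refMs + ms);
}

console.log(relativeToDate('in 30 minutes', 0).getTime()); // 1800000
console.log(relativeToDate('tomorrow'));                   // null
```

Checking `startsWith('min')` before `startsWith('hour')` mirrors the order of the alternation in the regex, so abbreviated units like "min" resolve correctly.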