prjct-cli 1.7.4 → 1.8.0

package/CHANGELOG.md CHANGED
@@ -1,5 +1,142 @@
  # Changelog
 
+ ## [1.8.0] - 2026-02-07
+
+ ### Features
+
+ - add Fibonacci estimation with variance tracking (PRJ-295) (#145)
+ - add PerformanceTracker for CLI metrics (PRJ-297) (#146)
+
+ ### Bug Fixes
+
+ - replace hardcoded command lists with config-driven context (PRJ-298) (#147)
+
+
+ ## [1.8.0] - 2026-02-07
+
+ ### Features
+ - **Fibonacci estimation with variance tracking (PRJ-295)**: Capture Fibonacci point estimates (1, 2, 3, 5, 8, 13, 21) on task start with automatic points-to-time conversion, record the actual duration on `done`, and display the estimation variance.
+
+ ### Implementation Details
+ - New `core/domain/fibonacci.ts` module: `FIBONACCI_POINTS`, `pointsToMinutes()`, `pointsToTimeRange()`, `findClosestPoint()`, `suggestFromHistory()` (sketched below)
+ - Added `estimatedPoints` and `estimatedMinutes` optional fields to `CurrentTaskSchema` and `SubtaskSchema`
+ - Added an `updateCurrentTask()` partial-update method to `StateStorage`
+ - The `now()` handler returns a `fibonacci` helper object with `storeEstimate(points)` for template use
+ - The `done()` handler records outcomes via `outcomeRecorder.record()` and displays the variance: `est: 5pt (1h 30m) → +50%`
+
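+ For reference, a minimal sketch of the points-to-time table and the variance math. The typical minutes per point match the shipped unit tests; the min/max bounds for points other than 1 and 5, and the `variancePercent()` helper, are illustrative assumptions rather than the actual module contents.
+
+ ```typescript
+ // Sketch in the spirit of core/domain/fibonacci.ts; not the real implementation.
+ export const FIBONACCI_POINTS = [1, 2, 3, 5, 8, 13, 21] as const
+ export type FibonacciPoint = (typeof FIBONACCI_POINTS)[number]
+
+ interface TimeEstimate {
+   min: number // minutes
+   max: number // minutes
+   typical: number // minutes
+ }
+
+ const POINT_MINUTES: Record<FibonacciPoint, TimeEstimate> = {
+   1: { min: 5, max: 15, typical: 10 },
+   2: { min: 15, max: 30, typical: 20 }, // bounds assumed
+   3: { min: 30, max: 60, typical: 45 }, // bounds assumed
+   5: { min: 60, max: 120, typical: 90 },
+   8: { min: 120, max: 240, typical: 180 }, // bounds assumed
+   13: { min: 240, max: 480, typical: 360 }, // bounds assumed
+   21: { min: 480, max: 960, typical: 720 }, // bounds assumed
+ }
+
+ export const pointsToMinutes = (p: FibonacciPoint): TimeEstimate => POINT_MINUTES[p]
+
+ export function findClosestPoint(minutes: number): FibonacciPoint {
+   // Nearest typical duration wins; ties keep the smaller point (15m → 1, not 2).
+   return FIBONACCI_POINTS.reduce((best, p) =>
+     Math.abs(POINT_MINUTES[p].typical - minutes) < Math.abs(POINT_MINUTES[best].typical - minutes)
+       ? p
+       : best
+   )
+ }
+
+ // Hypothetical helper: `est: 5pt (1h 30m) → +50%` means the task took 135m against a 90m typical estimate.
+ export const variancePercent = (estimatedMinutes: number, actualMinutes: number): number =>
+   Math.round(((actualMinutes - estimatedMinutes) / estimatedMinutes) * 100)
+ ```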
+ ### Test Plan
+
+ #### For QA
+ 1. Start a task — verify the `fibonacci` helper is returned with `storeEstimate()`, `pointsToMinutes()`, `pointsToTimeRange()`
+ 2. Call `storeEstimate(5)` — verify `estimatedPoints: 5` and `estimatedMinutes: 90` in state.json
+ 3. Complete the task with `p. done` — verify the outcome is recorded to `outcomes/outcomes.jsonl`
+ 4. Verify the variance display shows `est: 5pt (1h 30m) → +X%`
+ 5. Run `bun test` — 552 tests pass
+
+ #### For Users
+ **What changed:** Tasks now support Fibonacci point estimation with automatic time conversion and variance tracking on completion.
+ **How to use:** Estimation is stored via `storeEstimate(points)` during task start; variance is auto-displayed on `p. done`.
+ **Breaking changes:** None — estimation fields are optional.
+
+ ## [1.7.7] - 2026-02-07
+
+ ### Bug Fixes
+ - **Config-driven command context (PRJ-298)**: Replaced 4 hardcoded command lists in `prompt-builder.ts` with a single `command-context.config.json` config file. New commands no longer silently get zero context — the wildcard `*` entry provides sensible defaults, and a heuristic classifier handles unknown commands.
+ - **Quality checklists for ship/done**: `ship` and `done` commands now receive quality checklists (previously excluded from the hardcoded list).
+
+ ### Implementation Details
+ - Created `core/config/command-context.config.json` mapping 25 commands + wildcard to context sections (agents, patterns, checklists, modules)
+ - Zod schema in `core/schemas/command-context.ts` validates the config at load time
+ - `core/agentic/command-context.ts` provides `resolveCommandContextFull()` with the fallback chain config → cache → heuristic classify → wildcard (sketched below)
+ - `core/agentic/command-classifier.ts` uses word-boundary keyword matching with score-based priority to classify unknown commands from template metadata
+ - Auto-learn (Phase 3): after 3 identical heuristic classifications, the classification is persisted to the config file via a fire-and-forget write
+
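+ A minimal sketch of that fallback chain. The shapes follow `core/schemas/command-context.ts`; the in-memory cache, the stand-in classifier, and the count bookkeeping are simplified illustrations, not the module's actual internals.
+
+ ```typescript
+ interface CommandContextEntry {
+   agents: boolean
+   patterns: boolean
+   checklist: boolean
+   modules: string[]
+ }
+
+ interface CommandContextConfig {
+   version: string
+   commands: Record<string, CommandContextEntry> // includes the '*' wildcard entry
+ }
+
+ type Template = { frontmatter: { description?: string; 'allowed-tools'?: string[] }; content: string }
+
+ // Stand-in for core/agentic/command-classifier.classifyCommand (the real heuristic is keyword-based).
+ function classifyCommand(_command: string, template: Template): CommandContextEntry {
+   const writes = (template.frontmatter['allowed-tools'] ?? []).includes('Write')
+   return { agents: writes, patterns: writes, checklist: false, modules: [] }
+ }
+
+ const classificationCache = new Map<string, CommandContextEntry>()
+
+ function resolveCommandContextFull(
+   config: CommandContextConfig,
+   command: string,
+   template?: Template,
+ ): { entry: CommandContextEntry; source: 'config' | 'cache' | 'classified' | 'wildcard' } {
+   // 1. An explicit config entry always wins.
+   if (config.commands[command]) return { entry: config.commands[command], source: 'config' }
+   // 2. Commands classified earlier in this process are served from the cache.
+   const cached = classificationCache.get(command)
+   if (cached) return { entry: cached, source: 'cache' }
+   // 3. With template metadata available, classify heuristically and cache the result.
+   if (template) {
+     const entry = classifyCommand(command, template)
+     classificationCache.set(command, entry)
+     return { entry, source: 'classified' }
+   }
+   // 4. Otherwise fall back to the wildcard defaults.
+   return { entry: config.commands['*'], source: 'wildcard' }
+ }
+
+ // Auto-learn: identical classifications are counted; the third one tells the caller to persist.
+ const classificationCounts = new Map<string, { key: string; count: number }>()
+
+ function trackClassification(command: string, entry: CommandContextEntry): boolean {
+   const key = JSON.stringify(entry)
+   const prev = classificationCounts.get(command)
+   const count = prev && prev.key === key ? prev.count + 1 : 1 // a changed classification resets the count
+   classificationCounts.set(command, { key, count })
+   return count >= 3
+ }
+ ```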
+ ### Learnings
+ - Keyword substring matching causes false positives (e.g., "check" matching inside "checks") — word boundaries via `\b` regex are essential (see the sketch below)
+ - When quality and info keywords overlap, score-based priority (higher count wins) is more robust than boolean exclusion
+
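+ Illustrative sketch of the word-boundary scoring: the keyword lists below are assumptions; only the `\b` matching and the score comparison reflect the notes above.
+
+ ```typescript
+ const QUALITY_KEYWORDS = ['verify', 'validate', 'check', 'lint', 'release'] // assumed list
+ const INFO_KEYWORDS = ['show', 'display', 'summary', 'status', 'metrics'] // assumed list
+
+ // /\bcheck\b/ does not fire inside "checks", unlike a plain substring search.
+ const countMatches = (text: string, keywords: string[]): number =>
+   keywords.filter((kw) => new RegExp(`\\b${kw}\\b`, 'i').test(text)).length
+
+ function classifyKind(text: string): 'quality' | 'info' | 'other' {
+   const quality = countMatches(text, QUALITY_KEYWORDS)
+   const info = countMatches(text, INFO_KEYWORDS)
+   if (quality === 0 && info === 0) return 'other'
+   // Score-based priority: the higher count wins instead of one category excluding the other.
+   return quality >= info ? 'quality' : 'info'
+ }
+
+ // classifyKind('Validate all tests pass and lint checks succeed before release.') → 'quality'
+ // ("validate", "lint", and "release" hit on word boundaries; "check" does not match inside "checks").
+ ```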
+ ### Test Plan
+
+ #### For QA
+ 1. Run `bun test ./core/__tests__/agentic/command-context.test.ts` — all 20 tests pass
+ 2. Run `bun test ./core/__tests__/agentic/prompt-builder.test.ts` — all 16 existing tests pass
+ 3. Run `bun run build` — compiles without errors
+ 4. Verify `ship` and `done` commands have `checklist: true` in config
+ 5. Verify unknown commands get wildcard defaults (agents: true, patterns: true)
+
+ #### For Users
+ **What changed:** Commands like `ship` and `done` now receive quality checklists. New commands automatically get sensible context instead of nothing.
+ **How to use:** No user action needed — works automatically.
+ **Breaking changes:** None
+
+ ## [1.7.6] - 2026-02-07
+
+ ### Features
+ - **PerformanceTracker service (PRJ-297)**: New `core/infrastructure/performance-tracker.ts` singleton that automatically measures startup time, memory usage, and command durations on every CLI invocation. Data is stored in append-only JSONL with 5MB rotation.
+ - **`prjct perf` dashboard command**: Shows performance metrics vs targets for the last N days (default 7). Displays startup time, heap/RSS memory, context correctness rate, subtask handoff rate, and per-command duration breakdown.
+ - **Zod schemas for performance metrics**: `core/schemas/performance.ts` with typed schemas for all metric types (timing, memory, context correctness, subtask handoff, analysis state).
+
+ ### Implementation Details
+ - PerformanceTracker uses `process.hrtime.bigint()` for nanosecond-precision timing and `process.memoryUsage()` for memory snapshots
+ - Startup time is captured at the top of `bin/prjct.ts` via `globalThis.__perfStartNs` and recorded in `core/index.ts` after command execution (see the sketch below)
+ - All instrumentation is wrapped in non-critical try/catch so perf tracking can never break CLI functionality
+ - Uses the existing `jsonl-helper.appendJsonLineWithRotation` for storage (5MB rotation limit)
+ - 17 unit tests covering timing, memory, recording, context correctness, handoff, and report generation
+
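+ A minimal sketch of the startup-time handshake and the recorder methods called from `bin/prjct.ts`. The `recordStartup()` name and the plain `appendFile` fallback are assumptions; the real tracker goes through `jsonl-helper.appendJsonLineWithRotation` with the 5MB cap.
+
+ ```typescript
+ import { appendFile } from 'node:fs/promises'
+ import os from 'node:os'
+ import path from 'node:path'
+
+ // bin/prjct.ts, top of file: stash the start timestamp before any heavy imports run.
+ ;(globalThis as Record<string, unknown>).__perfStartNs = process.hrtime.bigint()
+
+ // Simplified stand-in for core/infrastructure/performance-tracker.ts
+ class PerformanceTracker {
+   private async append(projectId: string, record: Record<string, unknown>): Promise<void> {
+     const file = path.join(os.homedir(), '.prjct-cli', 'projects', projectId, 'storage', 'performance.jsonl')
+     await appendFile(file, `${JSON.stringify({ ts: new Date().toISOString(), ...record })}\n`)
+   }
+
+   recordTiming(projectId: string, metric: string, durationMs: number, meta: Record<string, string> = {}) {
+     return this.append(projectId, { type: 'timing', metric, durationMs, ...meta })
+   }
+
+   recordMemory(projectId: string, meta: Record<string, string> = {}) {
+     const { heapUsed, heapTotal, rss } = process.memoryUsage()
+     return this.append(projectId, { type: 'memory', heapUsed, heapTotal, rss, ...meta })
+   }
+
+   // Assumed method name: the notes only say startup time is recorded in core/index.ts after the command runs.
+   recordStartup(projectId: string): Promise<void> {
+     const startNs = (globalThis as Record<string, unknown>).__perfStartNs as bigint | undefined
+     if (startNs === undefined) return Promise.resolve()
+     const elapsedMs = Number(process.hrtime.bigint() - startNs) / 1e6 // nanoseconds → milliseconds
+     return this.recordTiming(projectId, 'startup_time', elapsedMs)
+   }
+ }
+
+ export const performanceTracker = new PerformanceTracker()
+ ```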
+ ### Learnings
+ - JSONL append-only pattern with rotation is ideal for time-series metrics (vs JSON write-through for stateful data)
+ - `globalThis` works well for passing data between the `bin/` entry point and `core/` modules without import coupling
+ - `process.memoryUsage().heapUsed` can momentarily exceed `heapTotal` during GC — don't assert `<=`
+
+ ### Test Plan
+
+ #### For QA
+ 1. Run `prjct status` then `prjct perf` — verify metrics appear (startup time, memory, command duration)
+ 2. Run multiple commands then `prjct perf 1` — verify all commands show in the dashboard
+ 3. Check `~/.prjct-cli/projects/{id}/storage/performance.jsonl` exists with valid JSONL entries
+ 4. Verify `prjct perf` with no data shows the "No performance data yet" message
+ 5. Verify target indicators: startup `<500ms` green, `>500ms` yellow warning
+
+ #### For Users
+ **What changed:** New `prjct perf` command shows a performance dashboard with startup time, memory usage, and command duration metrics.
+ **How to use:** Run `prjct perf` (default 7 days) or `prjct perf 30` for a 30-day view. Metrics are collected automatically.
+ **Breaking changes:** None
+
+
+ ## [1.7.5] - 2026-02-07
+
+ ### Refactoring
+
+ - remove unused deps and lazy-load @linear/sdk (PRJ-291) (#144)
+
+ ## [1.7.5] - 2026-02-07
+
+ ### Changed
+ - **Remove unused dependencies and lazy-load heavy optional ones (PRJ-291)**: Removed `lightningcss` (completely unused), moved `esbuild` to devDependencies (build-time only), and lazy-loaded `@linear/sdk` via dynamic `import()` so it loads only when Linear commands are invoked.
+
+ ### Implementation Details
+ - Removed `lightningcss` from dependencies (zero imports in the codebase)
+ - Moved `esbuild` from dependencies to devDependencies (only used in `scripts/build.js`)
+ - Changed `import { LinearClient } from '@linear/sdk'` to `import type` + dynamic `await import('@linear/sdk')` in `core/integrations/linear/client.ts`
+ - Excluded test files from the published package via `.npmignore`
+ - Removed `scripts/build.js` from the `files` field (dist/ ships pre-built)
+
+ ### Learnings
+ - The `import type` + dynamic `await import()` pattern preserves full type safety while deferring module load to runtime; type imports are erased at compile time with zero cost (see the sketch below)
+
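+ The pattern, as a minimal sketch (the `getLinearClient()` wrapper and its `apiKey` parameter are illustrative; `core/integrations/linear/client.ts` wraps the SDK in its own way):
+
+ ```typescript
+ // Type-only import: erased at compile time, so @linear/sdk is not loaded at startup.
+ import type { LinearClient } from '@linear/sdk'
+
+ let client: LinearClient | undefined
+
+ // The SDK is pulled in only when a Linear command actually needs it.
+ export async function getLinearClient(apiKey: string): Promise<LinearClient> {
+   if (!client) {
+     const { LinearClient } = await import('@linear/sdk') // loaded on first use
+     client = new LinearClient({ apiKey })
+   }
+   return client
+ }
+ ```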
+ ### Test Plan
+
+ #### For QA
+ 1. Run `bun test` — 538 tests pass, no regressions
+ 2. Run `bun run build` — compiles without errors
+ 3. Run `bun run typecheck` — zero type errors
+ 4. Run `prjct status` and `prjct linear list` — CLI works normally
+
+ #### For Users
+ **What changed:** Faster install (~75MB fewer dependencies), faster CLI startup (Linear SDK only loaded on demand).
+ **How to use:** No changes needed.
+ **Breaking changes:** None
+
  ## [1.7.4] - 2026-02-07
 
  ### Bug Fixes
package/bin/prjct.ts CHANGED
@@ -8,6 +8,10 @@
  * auto-install on first CLI use. This is the reliable path.
  */
 
+ // Performance: capture process start time (nanosecond precision)
+ // Exposed via globalThis so core/index.ts can read it for startup time metrics
+ ;(globalThis as Record<string, unknown>).__perfStartNs = process.hrtime.bigint()
+
  import os from 'node:os'
  import path from 'node:path'
  import chalk from 'chalk'
@@ -85,6 +89,16 @@ async function trackSession(command: string): Promise<() => void> {
  return () => {
  const durationMs = Date.now() - start
  sessionTracker.trackCommand(projectId, command, durationMs).catch(() => {})
+
+ // Performance tracking (non-critical, lazy-loaded)
+ import('../core/infrastructure/performance-tracker')
+   .then(({ performanceTracker }) => {
+     performanceTracker
+       .recordTiming(projectId, 'command_duration', durationMs, { command })
+       .catch(() => {})
+     performanceTracker.recordMemory(projectId, { command }).catch(() => {})
+   })
+   .catch(() => {})
  }
  }
  } catch {
package/core/__tests__/agentic/command-context.test.ts ADDED
@@ -0,0 +1,281 @@
+ /**
+  * Command Context Tests
+  *
+  * Tests for config-driven command context resolution,
+  * classification, caching, and auto-learn.
+  *
+  * @see PRJ-298
+  */
+
+ import { describe, expect, it } from 'bun:test'
+ import { classifyCommand } from '../../agentic/command-classifier'
+ import {
+   cacheClassification,
+   getCachedClassification,
+   loadCommandContextConfig,
+   resolveCommandContext,
+   resolveCommandContextFull,
+   trackClassification,
+ } from '../../agentic/command-context'
+ import type { CommandContextEntry } from '../../schemas/command-context'
+ import type { Template } from '../../types'
+
+ // =============================================================================
+ // Config Loading
+ // =============================================================================
+
+ describe('Command Context Config', () => {
+   it('should load and validate the config file', async () => {
+     const config = await loadCommandContextConfig()
+
+     expect(config.version).toBe('1.0.0')
+     expect(config.commands).toBeDefined()
+     expect(config.commands['*']).toBeDefined()
+   })
+
+   it('should have wildcard entry with sensible defaults', async () => {
+     const config = await loadCommandContextConfig()
+     const wildcard = config.commands['*']
+
+     expect(wildcard.agents).toBe(true)
+     expect(wildcard.patterns).toBe(true)
+     expect(wildcard.checklist).toBe(false)
+     expect(wildcard.modules).toEqual([])
+   })
+
+   it('should have explicit entries for known commands', async () => {
+     const config = await loadCommandContextConfig()
+
+     expect(config.commands.task).toBeDefined()
+     expect(config.commands.ship).toBeDefined()
+     expect(config.commands.bug).toBeDefined()
+     expect(config.commands.done).toBeDefined()
+     expect(config.commands.sync).toBeDefined()
+   })
+ })
+
+ // =============================================================================
+ // Config Resolution
+ // =============================================================================
+
+ describe('resolveCommandContext', () => {
+   it('should return explicit config for known commands', async () => {
+     const config = await loadCommandContextConfig()
+     const entry = resolveCommandContext(config, 'task')
+
+     expect(entry.modules).toContain('CLAUDE-intelligence.md')
+     expect(entry.modules).toContain('CLAUDE-storage.md')
+   })
+
+   it('should return wildcard for unknown commands', async () => {
+     const config = await loadCommandContextConfig()
+     const entry = resolveCommandContext(config, 'nonexistent-command')
+     const wildcard = config.commands['*']
+
+     expect(entry).toEqual(wildcard)
+   })
+
+   it('should give ship command patterns and checklist', async () => {
+     const config = await loadCommandContextConfig()
+     const entry = resolveCommandContext(config, 'ship')
+
+     expect(entry.patterns).toBe(true)
+     expect(entry.checklist).toBe(true)
+   })
+
+   it('should give done command checklist', async () => {
+     const config = await loadCommandContextConfig()
+     const entry = resolveCommandContext(config, 'done')
+
+     expect(entry.checklist).toBe(true)
+   })
+
+   it('should give sync command no context sections', async () => {
+     const config = await loadCommandContextConfig()
+     const entry = resolveCommandContext(config, 'sync')
+
+     expect(entry.agents).toBe(false)
+     expect(entry.patterns).toBe(false)
+     expect(entry.checklist).toBe(false)
+     expect(entry.modules).toEqual([])
+   })
+ })
+
+ // =============================================================================
+ // Full Resolution with Classification
+ // =============================================================================
+
+ describe('resolveCommandContextFull', () => {
+   it('should return source=config for known commands', async () => {
+     const config = await loadCommandContextConfig()
+     const result = resolveCommandContextFull(config, 'bug')
+
+     expect(result.source).toBe('config')
+     expect(result.entry.agents).toBe(true)
+   })
+
+   it('should classify unknown commands from template', async () => {
+     const config = await loadCommandContextConfig()
+     const template: Template = {
+       frontmatter: {
+         name: 'p:deploy',
+         description: 'Deploy the application to production',
+         'allowed-tools': ['Bash', 'Read'],
+       },
+       content: 'Build and deploy the project. Verify deployment status.',
+     }
+
+     const result = resolveCommandContextFull(config, 'deploy', template)
+     expect(result.source).toBe('classified')
+   })
+
+   it('should return source=cache for previously classified commands', async () => {
+     const config = await loadCommandContextConfig()
+     const entry: CommandContextEntry = {
+       agents: true,
+       patterns: false,
+       checklist: false,
+       modules: [],
+     }
+     cacheClassification('cached-cmd', entry)
+
+     const result = resolveCommandContextFull(config, 'cached-cmd')
+     expect(result.source).toBe('cache')
+     expect(result.entry).toEqual(entry)
+   })
+
+   it('should return source=wildcard when no template provided for unknown command', async () => {
+     const config = await loadCommandContextConfig()
+     const result = resolveCommandContextFull(config, 'truly-unknown-no-template')
+
+     expect(result.source).toBe('wildcard')
+   })
+ })
+
+ // =============================================================================
+ // Command Classifier
+ // =============================================================================
+
+ describe('classifyCommand', () => {
+   it('should classify code-modifying commands with Write tool', () => {
+     const template: Template = {
+       frontmatter: {
+         name: 'p:scaffold',
+         description: 'Scaffold a new component',
+         'allowed-tools': ['Write', 'Read', 'Bash'],
+       },
+       content: 'Create the component files and implement the structure.',
+     }
+
+     const result = classifyCommand('scaffold', template)
+     expect(result.agents).toBe(true)
+     expect(result.patterns).toBe(true)
+   })
+
+   it('should classify info commands as needing no context', () => {
+     const template: Template = {
+       frontmatter: {
+         name: 'p:stats',
+         description: 'Show project statistics',
+         'allowed-tools': ['Read'],
+       },
+       content: 'Display a summary of the project status and metrics.',
+     }
+
+     const result = classifyCommand('stats', template)
+     expect(result.agents).toBe(false)
+     expect(result.patterns).toBe(false)
+   })
+
+   it('should classify quality commands with checklists', () => {
+     const template: Template = {
+       frontmatter: {
+         name: 'p:verify',
+         description: 'Verify project integrity',
+         'allowed-tools': ['Read', 'Bash'],
+       },
+       content: 'Validate all tests pass and lint checks succeed before release.',
+     }
+
+     const result = classifyCommand('verify', template)
+     expect(result.checklist).toBe(true)
+   })
+ })
+
+ // =============================================================================
+ // Classification Cache
+ // =============================================================================
+
+ describe('Classification Cache', () => {
+   it('should cache and retrieve classifications', () => {
+     const entry: CommandContextEntry = {
+       agents: true,
+       patterns: true,
+       checklist: false,
+       modules: ['test.md'],
+     }
+     cacheClassification('test-cache', entry)
+
+     const cached = getCachedClassification('test-cache')
+     expect(cached).toEqual(entry)
+   })
+
+   it('should return undefined for uncached commands', () => {
+     const cached = getCachedClassification('never-cached')
+     expect(cached).toBeUndefined()
+   })
+ })
+
+ // =============================================================================
+ // Auto-Learn Tracking
+ // =============================================================================
+
+ describe('Auto-Learn (trackClassification)', () => {
+   it('should not trigger persist on first classification', () => {
+     const entry: CommandContextEntry = {
+       agents: true,
+       patterns: true,
+       checklist: false,
+       modules: [],
+     }
+     const shouldPersist = trackClassification('learn-test-1', entry)
+
+     expect(shouldPersist).toBe(false)
+   })
+
+   it('should trigger persist after threshold reached', () => {
+     const entry: CommandContextEntry = {
+       agents: false,
+       patterns: true,
+       checklist: true,
+       modules: [],
+     }
+
+     trackClassification('learn-test-2', entry) // 1
+     trackClassification('learn-test-2', entry) // 2
+     const shouldPersist = trackClassification('learn-test-2', entry) // 3
+
+     expect(shouldPersist).toBe(true)
+   })
+
+   it('should reset count when classification changes', () => {
+     const entry1: CommandContextEntry = {
+       agents: true,
+       patterns: true,
+       checklist: false,
+       modules: [],
+     }
+     const entry2: CommandContextEntry = {
+       agents: false,
+       patterns: false,
+       checklist: true,
+       modules: [],
+     }
+
+     trackClassification('learn-test-3', entry1) // 1
+     trackClassification('learn-test-3', entry1) // 2
+     const shouldPersist = trackClassification('learn-test-3', entry2) // reset to 1
+
+     expect(shouldPersist).toBe(false)
+   })
+ })
@@ -0,0 +1,113 @@
+ /**
+  * Fibonacci Estimation Module Tests
+  */
+
+ import { describe, expect, it } from 'bun:test'
+ import {
+   FIBONACCI_POINTS,
+   findClosestPoint,
+   formatMinutes,
+   isValidPoint,
+   pointsToMinutes,
+   pointsToTimeRange,
+ } from '../../domain/fibonacci'
+
+ describe('Fibonacci Estimation', () => {
+   describe('FIBONACCI_POINTS', () => {
+     it('should contain the standard Fibonacci sequence', () => {
+       expect(FIBONACCI_POINTS).toEqual([1, 2, 3, 5, 8, 13, 21])
+     })
+   })
+
+   describe('isValidPoint', () => {
+     it('should accept valid Fibonacci points', () => {
+       for (const p of FIBONACCI_POINTS) {
+         expect(isValidPoint(p)).toBe(true)
+       }
+     })
+
+     it('should reject non-Fibonacci numbers', () => {
+       expect(isValidPoint(0)).toBe(false)
+       expect(isValidPoint(4)).toBe(false)
+       expect(isValidPoint(6)).toBe(false)
+       expect(isValidPoint(10)).toBe(false)
+       expect(isValidPoint(22)).toBe(false)
+     })
+   })
+
+   describe('pointsToMinutes', () => {
+     it('should return min/max/typical for each point', () => {
+       const result = pointsToMinutes(1)
+       expect(result).toEqual({ min: 5, max: 15, typical: 10 })
+     })
+
+     it('should scale up with larger points', () => {
+       const small = pointsToMinutes(1)
+       const large = pointsToMinutes(21)
+       expect(large.typical).toBeGreaterThan(small.typical)
+     })
+
+     it('should have increasing typical times across the scale', () => {
+       let prev = 0
+       for (const p of FIBONACCI_POINTS) {
+         const { typical } = pointsToMinutes(p)
+         expect(typical).toBeGreaterThan(prev)
+         prev = typical
+       }
+     })
+   })
+
+   describe('formatMinutes', () => {
+     it('should format sub-hour durations', () => {
+       expect(formatMinutes(30)).toBe('30m')
+       expect(formatMinutes(5)).toBe('5m')
+     })
+
+     it('should format exact hours', () => {
+       expect(formatMinutes(60)).toBe('1h')
+       expect(formatMinutes(120)).toBe('2h')
+     })
+
+     it('should format hours and minutes', () => {
+       expect(formatMinutes(90)).toBe('1h 30m')
+       expect(formatMinutes(150)).toBe('2h 30m')
+     })
+   })
+
+   describe('pointsToTimeRange', () => {
+     it('should return formatted range string', () => {
+       expect(pointsToTimeRange(1)).toBe('5m–15m')
+       expect(pointsToTimeRange(5)).toBe('1h–2h')
+     })
+   })
+
+   describe('findClosestPoint', () => {
+     it('should find exact matches for typical times', () => {
+       expect(findClosestPoint(10)).toBe(1)
+       expect(findClosestPoint(20)).toBe(2)
+       expect(findClosestPoint(45)).toBe(3)
+       expect(findClosestPoint(90)).toBe(5)
+       expect(findClosestPoint(180)).toBe(8)
+       expect(findClosestPoint(360)).toBe(13)
+       expect(findClosestPoint(720)).toBe(21)
+     })
+
+     it('should find closest point for in-between values', () => {
+       // 15 minutes is equidistant between 1 (10m) and 2 (20m) — picks first match
+       expect(findClosestPoint(15)).toBe(1)
+       // 16 minutes is closer to 2 (20m) than 1 (10m)
+       expect(findClosestPoint(16)).toBe(2)
+       // 35 minutes is closer to 3 (45m) than 2 (20m)
+       expect(findClosestPoint(35)).toBe(3)
+     })
+
+     it('should return 1 for very small durations', () => {
+       expect(findClosestPoint(1)).toBe(1)
+       expect(findClosestPoint(0)).toBe(1)
+     })
+
+     it('should return 21 for very large durations', () => {
+       expect(findClosestPoint(1000)).toBe(21)
+     })
+   })
+ })