sysml-v2-lsp 0.13.0 → 0.14.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -118,6 +118,11 @@ sysml-v2-lsp/
  │ └── mcp/ # Mermaid diagram generator
  ├── grammar/ # ANTLR4 grammar files (.g4)
  ├── sysml.library/ # SysML v2 standard library
+ ├── benchmarks/ # Performance benchmark suite
+ │ ├── src/ # Runner, suites, reporters, utilities
+ │ ├── baselines/ # Saved baseline for regression detection
+ │ ├── results/ # JSON + Markdown output per run
+ │ └── fixtures/ # Synthetic .sysml files for benchmarking
  ├── examples/ # Example .sysml models
  ├── test/ # Unit tests (vitest)
  └── package.json # Extension manifest + monorepo scripts
@@ -139,6 +144,59 @@ make update-grammar # Pull latest grammar, rebuild parser + DFA snapshot
  make update-library # Pull latest SysML v2 standard library
  make dfa # Regenerate DFA snapshot (after any grammar change)
  make ci # Full CI pipeline (lint + build + test)
+ npm run bench # Run all benchmark suites
+ npm run bench:baseline # Save benchmark baseline
+ npm run bench:regression # Compare against baseline
+ ```
+
+ ## Benchmarks
+
+ A built-in benchmark suite measures parser, symbol table, LSP provider, memory, throughput, and folder-load performance. Results are written as both JSON and Markdown to `benchmarks/results/`.
+
+ ### Running Benchmarks
+
+ ```bash
+ npm run bench # run all suites
+ npm run bench:parse # parse suite only
+ npm run bench:providers # LSP providers suite only
+ ```
+
+ Or use the runner directly for full control:
+
+ ```bash
+ npx tsx benchmarks/src/runner.ts --suite parse --suite symbolTable
+ npx tsx benchmarks/src/runner.ts --runs 10 --warmup 3
+ npx tsx benchmarks/src/runner.ts --output ./my-results
+ ```
+
+ ### Suites
+
+ | Suite | What it measures |
+ | ------------- | ---------------------------------------------------------------------- |
+ | `parse` | ANTLR4 parse time — cold (no DFA) vs warm (DFA snapshot pre-loaded) |
+ | `symbolTable` | Symbol table build and lookup latency |
+ | `providers` | LSP features: diagnostics, hover, completion, references, rename, etc. |
+ | `memory` | Heap allocation per file and scaling behaviour |
+ | `throughput` | End-to-end lines/sec and tokens/sec across all example files |
+ | `folderLoad` | Full folder parse + symbol build (examples, standard library, all) |
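(Aside, not part of the package diff.) The `throughput` suite's lines/sec and tokens/sec figures reduce to simple rate arithmetic over a timed run; a minimal sketch, where `throughputRates` and its parameters are illustrative names, not the package's actual code:

```typescript
// Illustrative sketch only — not the package's implementation.
// Derives lines/sec and tokens/sec from counts and elapsed wall time,
// the two metrics the throughput suite reports.
function throughputRates(
  lineCount: number,
  tokenCount: number,
  elapsedMs: number,
): { linesPerSec: number; tokensPerSec: number } {
  const elapsedSec = elapsedMs / 1000;
  return {
    linesPerSec: lineCount / elapsedSec,
    tokensPerSec: tokenCount / elapsedSec,
  };
}

// e.g. 5,000 lines / 40,000 tokens parsed in 250 ms
const rates = throughputRates(5000, 40000, 250);
console.log(rates.linesPerSec);  // 20000
console.log(rates.tokensPerSec); // 160000
```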
+
+ ### Regression Detection
+
+ Save a baseline, then compare future runs against it:
+
+ ```bash
+ npm run bench:baseline # save current results as baseline
+ npm run bench:regression # compare against baseline, exit 1 on regression
+ ```
+
+ The default regression threshold is 20%. Override with `--threshold <n>`.
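(Aside, not part of the package diff.) A percentage threshold like the documented 20% amounts to a relative comparison against the saved baseline; a minimal sketch of that check, with hypothetical names rather than the package's actual API:

```typescript
// Illustrative sketch, not the package's code: the comparison a
// relative regression threshold implies.
function isRegression(
  baselineMs: number,
  currentMs: number,
  thresholdPct = 20, // default mirrors the documented 20%
): boolean {
  // A run regresses when it is more than thresholdPct slower than baseline.
  return currentMs > baselineMs * (1 + thresholdPct / 100);
}

console.log(isRegression(100, 115)); // 15% slower -> false
console.log(isRegression(100, 125)); // 25% slower -> true
```

Under this reading, `--threshold 50` would only flag runs more than 1.5x slower than the baseline.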
+
+ ### Viewing Results
+
+ Each run produces a JSON file and a Markdown report in `benchmarks/results/`. To convert an existing JSON result to Markdown:
+
+ ```bash
+ npx tsx benchmarks/src/reporters/markdownReporter.ts benchmarks/results/<file>.json
  ```
 
  ## Grammar Updates