@crownpeak/dqm-react-component-dev-mcp 1.2.4 → 1.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/data/CHANGELOG.md CHANGED
@@ -5,6 +5,42 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+ ## [1.3.0] - 2026-01-09
+
+ ### Added
+ - **GPT-5.2 Support**: Default AI model upgraded from `gpt-4.1-mini` to `gpt-5.2`
+   - 1M token context window enables larger translation batches and fewer API calls
+   - New `reasoning_effort` parameter (`low`/`medium`/`high`) for GPT-5 models
+   - Automatic API adaptation for GPT-5 vs GPT-4 differences:
+     - Token parameter: `max_completion_tokens` (GPT-5) vs `max_tokens` (GPT-4)
+     - Structured output: `json_schema` (GPT-5) vs `json_object` (GPT-4)
+     - System role: `developer` (GPT-5) vs `system` (GPT-4)
+   - 50,000 token context budget (vs 12,000 for GPT-4o)
+ - **Model Capabilities Module**: New `src/utils/modelCapabilities.ts` for centralized model detection
+   - `isGPT5Model()`, `isReasoningModel()` utility functions
+   - `getModelCapabilities()` returns full capability info per model
+   - `buildTokenParams()`, `buildReasoningParams()`, `getSystemRole()` helpers
+ - **Reasoning Effort UI**: New setting in AI Settings dialog (only visible for GPT-5 models)
+   - Persisted to `localStorage` as `dqm_reasoning_effort`
+   - Full i18n support (EN/DE/ES)
+ - **MCP Server Feature**: Added MCP Server section to website features
+   - New feature card with link to documentation
+   - Translations for all 3 languages
+
+ ### Changed
+ - **AI Settings Dialog**: GPT-5.2 now shown with "NEW" badge in model selector
+ - **AI Summary Card**: Improved layout with column flex direction for better readability
+ - **Website Hero Badge**: Updated from version number to "GPT-5.2 Powered AI"
+ - **Website MSW Provider**: Refactored to `useMSW()` hook for better lifecycle control
+   - MSW now starts only when sidebar opens
+   - MSW resets completely when sidebar closes
+ - **Footer Links**: Updated Crownpeak DQM Platform URL to German FirstSpirit product page
+
+ ### Fixed
+ - **Translation Feature Descriptions**: Updated to reflect GPT-5.2 capabilities
+ - **Website Feature Cards**: Added MCP Server feature with external documentation link
+ - **Accessibility**: Added `aria-label` to MCP Server feature link
+
  ## [1.2.4] - 2026-01-08
 
  ### Added
@@ -69,10 +69,46 @@ flowchart TD
  ### OpenAI Backend
 
  **Models Supported**:
- - `gpt-4o-mini` (Recommended, fast & cheap)
- - `gpt-4o` (Higher quality)
- - `gpt-4.1-mini`
- - `gpt-4.1`
+
+ | Model | Context Window | Recommended For |
+ |-------|----------------|-----------------|
+ | `gpt-5.2` | 1M tokens | 🌟 Default – Best performance & quality |
+ | `gpt-4.1-mini` | 1M tokens | Cost-effective alternative |
+ | `gpt-4.1` | 1M tokens | High quality GPT-4 |
+ | `gpt-4o-mini` | 128K tokens | Fast & cheap (legacy) |
+ | `gpt-4o` | 128K tokens | Higher quality (legacy) |
+
+ > **🚀 GPT-5.2** is the new default model with a 1M token context window, enabling much larger translation batches and better quality.
+
+ #### GPT-5 vs GPT-4 Differences
+
+ The component automatically adapts to API differences between GPT-5 and GPT-4 models:
+
+ | Feature | GPT-5.x | GPT-4.x |
+ |---------|---------|---------|
+ | Token Parameter | `max_completion_tokens` | `max_tokens` |
+ | Structured Output | `json_schema` | `json_object` |
+ | System Role | `developer` | `system` |
+ | Reasoning Effort | ✅ Supported | ❌ Not available |
+ | Context Budget | 50,000 tokens | 12,000 tokens |
+
+ #### Reasoning Effort (GPT-5 Only)
+
+ GPT-5 models support a **Reasoning Effort** parameter that controls how deeply the model analyzes translations:
+
+ | Level | Description | Best For |
+ |-------|-------------|----------|
+ | `low` | Fast, straightforward translations | Simple content, high volume |
+ | `medium` | Balanced analysis (default) | Most use cases |
+ | `high` | Deep reasoning, highest accuracy | Technical content, quality-critical |
+
+ Configure via the **AI Settings** dialog or localStorage:
+
+ ```typescript
+ localStorage.setItem('dqm_reasoning_effort', 'medium'); // 'low' | 'medium' | 'high'
+ ```
+
+ > **Note:** Reasoning Effort is only visible in the UI when a GPT-5 model is selected.
 
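The adaptation rules in the tables above can be sketched as small helpers. The function names match those the changelog lists for `src/utils/modelCapabilities.ts`, but the bodies below are an illustrative sketch, not the actual implementation:

```typescript
// Illustrative sketch of the GPT-5 vs GPT-4 adaptation rules above --
// not the actual src/utils/modelCapabilities.ts implementation.
type ReasoningEffort = 'low' | 'medium' | 'high';

function isGPT5Model(model: string): boolean {
  return model.startsWith('gpt-5');
}

// GPT-5 uses `max_completion_tokens`; GPT-4 uses `max_tokens`.
function buildTokenParams(model: string, maxTokens: number): Record<string, number> {
  return isGPT5Model(model)
    ? { max_completion_tokens: maxTokens }
    : { max_tokens: maxTokens };
}

// Reasoning effort only applies to GPT-5 models; empty object otherwise.
function buildReasoningParams(model: string, effort: ReasoningEffort): Record<string, string> {
  return isGPT5Model(model) ? { reasoning_effort: effort } : {};
}

// GPT-5 expects the instruction role `developer`; GPT-4 uses `system`.
function getSystemRole(model: string): 'developer' | 'system' {
  return isGPT5Model(model) ? 'developer' : 'system';
}
```

Call sites can then spread `buildTokenParams(...)` and `buildReasoningParams(...)` into the request body and never branch on the model family themselves.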
  **Setup**:
 
@@ -284,6 +320,8 @@ graph TD
  - **Tiny**: 20+ checkpoints → Top 5 most critical only
  - **Fail**: > 50 checkpoints → Skip summary (too large)
 
+ > **💡 GPT-5 Advantage**: With GPT-5.2's 1M token context window, the component uses a 50,000 token context budget (vs 12,000 for GPT-4o), enabling larger batches and fewer API calls.
+
  **Prompt Template**:
 
  ```typescript
@@ -397,7 +435,8 @@ The AI features store configuration in `localStorage`:
  | `dqm_translate_results_mode` | `'fast' \| 'full'` | `'fast'` | Translation mode |
  | `dqm_ai_summary_enabled` | `'true' \| 'false'` | `'true'` | Summary toggle |
  | `dqm_openai_apiKey` | `string` | `null` | OpenAI API key |
- | `dqm_openai_model` | `string` | `'gpt-4.1-mini'` | OpenAI model name |
+ | `dqm_openai_model` | `string` | `'gpt-5.2'` | OpenAI model name |
+ | `dqm_reasoning_effort` | `'low' \| 'medium' \| 'high'` | `'medium'` | GPT-5 reasoning depth |
  | `dqm_openai_baseUrl` | `string` | `'https://api.openai.com/v1'` | OpenAI base URL |
 
  **Access Pattern**:
@@ -439,6 +478,24 @@ setLocalStorageItem('dqm_openai_apiKey', 'sk-...');
 
  ### Cost Estimation (OpenAI)
 
+ #### GPT-5.2 (Default)
+
+ Based on `gpt-5.2` pricing (~$0.10/1M input tokens, ~$0.40/1M output tokens):
+
+ | Operation | Average Tokens | Cost per Call |
+ |-----------|----------------|---------------|
+ | Translate 1 checkpoint | ~200 input, ~100 output | ~$0.00006 |
+ | Translate 50 checkpoints | ~10k input, ~5k output | ~$0.003 |
+ | Summary (5 issues) | ~500 input, ~200 output | ~$0.00013 |
+ | Summary (20 issues) | ~2k input, ~500 output | ~$0.0004 |
+
+ **Monthly costs** (assuming 1000 analyses/month with 20 checkpoints each):
+ - Translation only: ~$3.00/month
+ - Summary only: ~$0.40/month
+ - Both: ~$3.40/month
+
+ #### GPT-4o-mini (Legacy)
+
  Based on `gpt-4o-mini` pricing (~$0.15/1M input tokens, ~$0.60/1M output tokens):
 
  | Operation | Average Tokens | Cost per Call |
@@ -334,7 +334,7 @@ function MyComponent() {
    const engine = useAIEngine({
      enabled: true,
      openAiApiKey: 'sk-...',
-     openAiModel: 'gpt-4.1-mini', // Optional, default: 'gpt-4.1-mini'
+     openAiModel: 'gpt-5.2', // Optional, default: 'gpt-5.2'
      openAiBaseUrl: 'https://api.openai.com/v1', // Optional
    });
  }
@@ -346,7 +346,7 @@ function MyComponent() {
  |----------|------|----------|-------------|
  | `enabled` | `boolean` | ✅ | Whether AI features are enabled |
  | `openAiApiKey` | `string` | ❌ | OpenAI API key |
- | `openAiModel` | `string` | ❌ | OpenAI model name (default: 'gpt-4.1-mini') |
+ | `openAiModel` | `string` | ❌ | OpenAI model name (default: 'gpt-5.2') |
  | `openAiBaseUrl` | `string` | ❌ | OpenAI base URL (default: 'https://api.openai.com/v1') |
 
  ##### Returns (UseAIEngineReturn)
@@ -386,7 +386,7 @@ function MyComponent() {
    cacheManager,
    originalData: analysisData,
    targetLang: 'de',
-   modelId: 'gpt-4.1-mini',
+   modelId: 'gpt-5.2',
    enabled: true,
    mode: 'fast',
    computeBudgetMs: 15000,
@@ -466,7 +466,7 @@ function MyComponent() {
    engine,
    originalData: analysisData,
    targetLang: 'de',
-   modelId: 'gpt-4.1-mini',
+   modelId: 'gpt-5.2',
    enabled: true,
    cache: cacheManager.cache,
  });
@@ -988,7 +988,7 @@ The DQM component uses localStorage for persisting user preferences and authenti
  | Key | Type | Description |
  |-----|------|-------------|
  | `dqm_openai_apiKey` | `string` | OpenAI API key for translation/summary |
- | `dqm_openai_model` | `string` | OpenAI model (default: 'gpt-4.1-mini') |
+ | `dqm_openai_model` | `string` | OpenAI model (default: 'gpt-5.2') |
  | `dqm_target_language` | `string` | Target language for translation (ISO 639-1) |
  | `dqm_translate_results_enabled` | `'true' \| 'false'` | Translation feature enabled |
  | `dqm_ai_summary_enabled` | `'true' \| 'false'` | AI summary feature enabled |
@@ -1007,7 +1007,7 @@ The DQM component uses localStorage for persisting user preferences and authenti
  ```typescript
  // Set OpenAI configuration before loading DQM
  localStorage.setItem('dqm_openai_apiKey', 'sk-...');
- localStorage.setItem('dqm_openai_model', 'gpt-4o-mini');
+ localStorage.setItem('dqm_openai_model', 'gpt-5.2');
  localStorage.setItem('dqm_target_language', 'de');
  localStorage.setItem('dqm_translate_results_enabled', 'true');
  ```
@@ -419,11 +419,11 @@ export default App;
  ```
 
  **Key Features:**
- - **Backend:** OpenAI (gpt-4o-mini, gpt-4o, gpt-4.1)
+ - **Backend:** OpenAI (gpt-5.2, gpt-4o, gpt-4.1)
  - **Performance:** ~2-5s for 50 checkpoints (batch processing with 3 concurrent requests)
  - **Caching:** Automatic IndexedDB + In-Memory caching (FNV-1a hash-based)
  - **Mode:** `fast` (15s timeout, fails gracefully) or `full` (120s timeout, complete translation)
- - **Cost:** ~$0.001-0.003 per checkpoint (gpt-4o-mini)
+ - **Cost:** ~$0.001-0.003 per checkpoint (gpt-5.2)
 
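The caching bullet above names FNV-1a as the hash behind the cache keys. A minimal 32-bit FNV-1a, for reference (the component's internal hashing code may differ in detail):

```typescript
// Minimal 32-bit FNV-1a hash, the algorithm named in the caching bullet
// above. Illustrative only -- the component's internals may differ.
function fnv1a32(input: string): number {
  let hash = 0x811c9dc5; // FNV-1a 32-bit offset basis
  const bytes = new TextEncoder().encode(input); // hash UTF-8 bytes
  for (const byte of bytes) {
    hash ^= byte;
    hash = Math.imul(hash, 0x01000193) >>> 0; // FNV prime, kept in uint32
  }
  return hash >>> 0;
}

// A stable cache key could combine checkpoint text and target language,
// e.g. fnv1a32(`${checkpointText}|de`) -- the exact key shape is a guess.
```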
  ### Example 7: AI Summary Generation
 
@@ -462,7 +462,7 @@ export default App;
  ```
 
  **Key Features:**
- - **Backend:** OpenAI (gpt-4o-mini, gpt-4o, gpt-4.1)
+ - **Backend:** OpenAI (gpt-5.2, gpt-4o, gpt-4.1)
  - **Performance:** ~3-8s for 50 checkpoints
  - **Chunking:** Automatic chunking based on token count (single/chunk/tiny strategies)
  - **Caching:** Persistent IndexedDB cache (reuses summaries across sessions)
@@ -525,7 +525,7 @@ export default App;
  **Performance Tips:**
  - Use `mode: 'fast'` for translation if you prioritize speed over completeness
  - Enable caching to avoid re-translation on subsequent analyses
- - Use gpt-4o-mini for cost efficiency, gpt-4o for best quality
+ - Use gpt-5.2 for optimal balance of speed, cost, and quality
 
  ### Example 9: AI Settings UI (Advanced)
 
@@ -608,7 +608,7 @@ export default App;
  **UI Features:**
  - **Settings Dialog:** Click gear icon in sidebar header to open AI settings
  - **Real-time Toggle:** Enable/disable translation and summary without reloading
- - **Model Selection:** Choose between OpenAI models (gpt-4o-mini, gpt-4o, gpt-4.1)
+ - **Model Selection:** Choose between OpenAI models (gpt-5.2, gpt-4o, gpt-4.1)
  - **Language Selection:** Change target language on the fly
  - **Persistent Settings:** All settings saved to localStorage
 
@@ -122,7 +122,7 @@ config={{
    translation: {
      enabled: true, // MUST be true
      apiKey: 'sk-...', // Valid OpenAI API key
-     model: 'gpt-4o-mini', // Correct model name
+     model: 'gpt-5.2', // Correct model name
      targetLanguage: 'de', // ISO 639-1 code
      mode: 'fast',
    },
@@ -144,14 +144,14 @@ console.log('OpenAI API Key Set:', !!localStorage.getItem('dqm_openai_apiKey'));
  **A:** Try these optimizations:
 
  1. **Switch to Fast Mode**: `mode: 'fast'` (15s timeout vs 120s for `'full'`)
- 2. **Use a faster model**: `gpt-4o-mini` is optimized for speed
+ 2. **Use a faster model**: `gpt-5.2` is optimized for speed and cost-efficiency
 
  ```typescript
  // Optimize for speed
  translation: {
    enabled: true,
    apiKey: 'sk-...',
-   model: 'gpt-4o-mini', // Fast + cheap
+   model: 'gpt-5.2', // Fast + cheap
    targetLanguage: 'de',
    mode: 'fast', // 15s timeout
  }
@@ -164,7 +164,7 @@ translation: {
  **A:** Check these:
 
  1. **OpenAI API Key**: Summary requires valid OpenAI key
- 2. **Model**: Ensure model supports JSON mode (`gpt-4o-mini`, `gpt-4o`, `gpt-4.1`)
+ 2. **Model**: Ensure model supports JSON mode (`gpt-5.2`, `gpt-4o`, `gpt-4.1`)
  3. **Timeout**: Increase timeout if analysis has many checkpoints
 
  ```typescript
package/data/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@crownpeak/dqm-react-component",
-   "version": "1.2.4",
+   "version": "1.3.0",
    "private": false,
    "description": "A React component for Crownpeak Digital Quality Management (DQM) integration",
    "type": "module",
@@ -1,6 +1,6 @@
  {
    "name": "@crownpeak/dqm-react-component-website",
-   "version": "1.2.4",
+   "version": "1.3.0",
    "private": true,
    "scripts": {
      "dev": "next dev",
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@crownpeak/dqm-react-component-dev-mcp",
-   "version": "1.2.4",
+   "version": "1.3.0",
    "description": "MCP Server for Crownpeak DQM React Component documentation - powered by Probe",
    "author": "Crownpeak Technology GmbH",
    "license": "MIT",