bedrock-wrapper 2.4.2 → 2.4.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +44 -4
- package/README.md +10 -3
- package/bedrock-models.js +2 -14
- package/bedrock-wrapper.js +20 -1
- package/logs/e0b34b2c-ee9a-4813-893a-82d47d3d5141/notification.json +51 -0
- package/logs/e0b34b2c-ee9a-4813-893a-82d47d3d5141/post_tool_use.json +4062 -0
- package/logs/e0b34b2c-ee9a-4813-893a-82d47d3d5141/pre_tool_use.json +1625 -0
- package/logs/e0b34b2c-ee9a-4813-893a-82d47d3d5141/stop.json +65 -0
- package/logs/e0b34b2c-ee9a-4813-893a-82d47d3d5141/subagent_stop.json +9 -0
- package/logs/e0b34b2c-ee9a-4813-893a-82d47d3d5141/user_prompt_submit.json +65 -0
- package/package.json +2 -1
- package/test-stop-sequences.js +276 -0
package/CHANGELOG.md
CHANGED
@@ -1,20 +1,60 @@
 # Changelog
 All notable changes to this project will be documented in this file.
 
+## [2.4.3] - 2025-07-31 (Stop Sequences Fixes)
+### Fixed
+- **Critical Discovery**: Removed stop sequences support from Llama models
+  - AWS Bedrock does not support stop sequences for Llama models (confirmed via official AWS documentation)
+  - Llama models only support: `prompt`, `temperature`, `top_p`, `max_gen_len`, `images`
+  - This is an AWS Bedrock limitation, not a wrapper limitation
+- Fixed Nova model configuration conflicts that were causing stop sequence inconsistencies
+  - Removed conflicting empty `inferenceConfig: {}` from Nova model configurations
+- Improved error handling for empty responses when stop sequences trigger early
+
+### Updated
+- **Documentation corrections**
+  - Corrected stop sequences support claims (removed "all models support" language)
+  - Added accurate model-specific support matrix with sequence limits
+  - Added comprehensive stop sequences support table with AWS documentation references
+- **Model Support Matrix** now clearly documented:
+  - ✅ Claude models: Full support (up to 8,191 sequences)
+  - ✅ Nova models: Full support (up to 4 sequences)
+  - ✅ Mistral models: Full support (up to 10 sequences)
+  - ❌ Llama models: Not supported (AWS Bedrock limitation)
+
+### Technical Details
+- Based on comprehensive research of official AWS Bedrock documentation
+- All changes maintain full backward compatibility
+- Test results show significant improvements in stop sequences reliability for supported models
+- Added detailed explanations to help users understand AWS Bedrock's actual capabilities
+
 ## [2.4.2] - 2025-07-31 (Stop Sequences Support)
 ### Added
-- Stop sequences support for
+- Stop sequences support for compatible models
 - OpenAI-compatible `stop` and `stop_sequences` parameters
 - Automatic string-to-array conversion for compatibility
-- Model-specific parameter mapping (stop_sequences for Claude, stopSequences for Nova, stop for
+- Model-specific parameter mapping (stop_sequences for Claude, stopSequences for Nova, stop for Mistral)
 - Enhanced request building logic to include stop sequences in appropriate API formats
-- Comprehensive stop sequences testing and validation
+- Comprehensive stop sequences testing and validation with `npm run test-stop`
+
+### Fixed
+- **Critical Discovery**: Removed stop sequences support from Llama models
+  - AWS Bedrock does not support stop sequences for Llama models (confirmed via official documentation)
+  - Llama models only support: `prompt`, `temperature`, `top_p`, `max_gen_len`, `images`
+  - This is an AWS Bedrock limitation, not a wrapper limitation
+- Fixed Nova model configuration conflicts that were causing stop sequence inconsistencies
+- Improved error handling for empty responses when stop sequences trigger early
 
 ### Technical Details
--
+- **Model Support Matrix**:
+  - ✅ Claude models: Full support (up to 8,191 sequences)
+  - ✅ Nova models: Full support (up to 4 sequences)
+  - ✅ Mistral models: Full support (up to 10 sequences)
+  - ❌ Llama models: Not supported (AWS Bedrock limitation)
 - Updated request construction for both messages API and prompt-based models
 - Supports both single string and array formats for stop sequences
 - Maintains full backward compatibility with existing API usage
+- Added comprehensive documentation in README.md and CLAUDE.md explaining support limitations
 
 ## [2.4.0] - 2025-07-24 (AWS Nova Models)
 ### Added
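The model-specific parameter mapping described in the changelog can be sketched as follows. This is a simplified illustration; `applyStopSequences` and the model-name prefix checks are hypothetical, not the wrapper's actual internals — only the target field names come from the changelog:

```javascript
// Sketch: route an OpenAI-style stop value to the field each model family
// expects. Llama is a deliberate no-op because AWS Bedrock ignores stop
// sequences for Llama models.
function applyStopSequences(request, modelName, stop) {
    if (stop === undefined || stop === null) return request;
    const sequences = Array.isArray(stop) ? stop : [stop]; // string-to-array conversion
    if (modelName.startsWith("Claude")) {
        request.stop_sequences = sequences;   // Claude: up to 8,191 sequences
    } else if (modelName.startsWith("Nova")) {
        request.stopSequences = sequences;    // Nova: up to 4 sequences
    } else if (modelName.startsWith("Mistral")) {
        request.stop = sequences;             // Mistral: up to 10 sequences
    }
    return request;
}
```

For example, `applyStopSequences({}, "Claude-3-5-Sonnet", "7")` produces `{ stop_sequences: ["7"] }`, while a Llama model name leaves the request untouched.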
package/README.md
CHANGED
@@ -192,7 +192,7 @@ You can include multiple images in a single message by adding more image_url obj
 
 ### Stop Sequences
 
-
+Stop sequences are custom text sequences that cause the model to stop generating text. This is useful for controlling where the model stops its response.
 
 ```javascript
 const openaiChatCompletionsCreateObject = {
@@ -205,11 +205,16 @@
 };
 ```
 
+**Model Support:**
+- ✅ **Claude models**: Fully supported (up to 8,191 sequences)
+- ✅ **Nova models**: Fully supported (up to 4 sequences)
+- ✅ **Mistral models**: Fully supported (up to 10 sequences)
+- ❌ **Llama models**: Not supported (AWS Bedrock limitation)
+
 **Features:**
 - Compatible with OpenAI's `stop` parameter (single string or array)
 - Also accepts `stop_sequences` parameter for explicit usage
 - Automatic conversion between string and array formats
-- Works with all 26+ supported models (Claude, Nova, Llama, Mistral)
 - Model-specific parameter mapping handled automatically
 
 **Example Usage:**
@@ -217,10 +222,12 @@
 // Stop generation when model tries to output "7"
 const result = await bedrockWrapper(awsCreds, {
     messages: [{ role: "user", content: "Count from 1 to 10" }],
-    model: "Claude-3-5-Sonnet",
+    model: "Claude-3-5-Sonnet", // Use Claude, Nova, or Mistral models
     stop_sequences: ["7"]
 });
 // Response: "1, 2, 3, 4, 5, 6," (stops before "7")
+
+// Note: Llama models will ignore stop sequences due to AWS Bedrock limitations
 ```
 
 ---
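The dual `stop` / `stop_sequences` acceptance and string-to-array conversion listed under **Features** can be sketched like this (`normalizeStop` is a hypothetical helper name for illustration, not the wrapper's exported API):

```javascript
// Sketch: accept either OpenAI's `stop` or the explicit `stop_sequences`,
// as a single string or an array, and normalize to an array of strings.
function normalizeStop({ stop, stop_sequences }) {
    const raw = stop_sequences !== undefined ? stop_sequences : stop;
    if (raw === undefined || raw === null) return undefined;
    return Array.isArray(raw) ? raw : [raw];
}
```

So `normalizeStop({ stop: "7" })` and `normalizeStop({ stop_sequences: ["7"] })` both produce `["7"]`, which is what lets the README example pass a single string.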
package/bedrock-models.js
CHANGED
@@ -301,7 +301,6 @@ export const bedrock_models = [
     "display_role_names": true,
     "max_tokens_param_name": "max_gen_len",
     "max_supported_response_tokens": 2048,
-    "stop_sequences_param_name": "stop",
     "response_chunk_element": "generation"
 },
 {
@@ -330,7 +329,6 @@ export const bedrock_models = [
     "display_role_names": true,
     "max_tokens_param_name": "max_gen_len",
     "max_supported_response_tokens": 2048,
-    "stop_sequences_param_name": "stop",
     "response_chunk_element": "generation"
 },
 {
@@ -359,7 +357,6 @@ export const bedrock_models = [
     "display_role_names": true,
     "max_tokens_param_name": "max_gen_len",
     "max_supported_response_tokens": 2048,
-    "stop_sequences_param_name": "stop",
     "response_chunk_element": "generation"
 },
 {
@@ -388,7 +385,6 @@ export const bedrock_models = [
     "display_role_names": true,
     "max_tokens_param_name": "max_gen_len",
     "max_supported_response_tokens": 2048,
-    "stop_sequences_param_name": "stop",
     "response_chunk_element": "generation"
 },
 {
@@ -417,7 +413,6 @@ export const bedrock_models = [
     "display_role_names": true,
     "max_tokens_param_name": "max_gen_len",
     "max_supported_response_tokens": 2048,
-    "stop_sequences_param_name": "stop",
     "response_chunk_element": "generation"
 },
 {
@@ -445,7 +440,6 @@ export const bedrock_models = [
     "display_role_names": true,
     "max_tokens_param_name": "max_gen_len",
     "max_supported_response_tokens": 2048,
-    "stop_sequences_param_name": "stop",
     "response_chunk_element": "generation"
 },
 {
@@ -473,7 +467,6 @@ export const bedrock_models = [
     "display_role_names": true,
     "max_tokens_param_name": "max_gen_len",
     "max_supported_response_tokens": 2048,
-    "stop_sequences_param_name": "stop",
     "response_chunk_element": "generation"
 },
 {
@@ -501,7 +494,6 @@ export const bedrock_models = [
     "display_role_names": true,
     "max_tokens_param_name": "max_gen_len",
     "max_supported_response_tokens": 2048,
-    "stop_sequences_param_name": "stop",
     "response_chunk_element": "generation"
 },
 {
@@ -529,7 +521,6 @@ export const bedrock_models = [
     "display_role_names": true,
     "max_tokens_param_name": "max_gen_len",
     "max_supported_response_tokens": 2048,
-    "stop_sequences_param_name": "stop",
     "response_chunk_element": "generation"
 },
 {
@@ -557,7 +548,6 @@ export const bedrock_models = [
     "display_role_names": true,
     "max_tokens_param_name": "max_gen_len",
     "max_supported_response_tokens": 2048,
-    "stop_sequences_param_name": "stop",
     "response_chunk_element": "generation"
 },
 {
@@ -576,8 +566,7 @@
     "response_chunk_element": "contentBlockDelta.delta.text",
     "response_nonchunk_element": "output.message.content[0].text",
     "special_request_schema": {
-        "schemaVersion": "messages-v1"
-        "inferenceConfig": {}
+        "schemaVersion": "messages-v1"
     },
     "image_support": {
         "max_image_size": 5242880, // 5MB per image
@@ -601,8 +590,7 @@
     "response_chunk_element": "contentBlockDelta.delta.text",
     "response_nonchunk_element": "output.message.content[0].text",
     "special_request_schema": {
-        "schemaVersion": "messages-v1"
-        "inferenceConfig": {}
+        "schemaVersion": "messages-v1"
     },
     "image_support": {
         "max_image_size": 5242880, // 5MB per image
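After these removals, a Llama entry carries no stop-sequence field at all. An illustrative shape, mirroring only the fields visible in the context lines above (not a complete entry from bedrock-models.js):

```javascript
// Illustrative Llama model entry after the change: the
// "stop_sequences_param_name" key is gone, since AWS Bedrock ignores
// stop sequences for Llama models.
const llamaModelEntry = {
    display_role_names: true,
    max_tokens_param_name: "max_gen_len",
    max_supported_response_tokens: 2048,
    response_chunk_element: "generation"
};
```

Request-building code can then use `"stop_sequences_param_name" in modelEntry` as the signal for whether a model accepts stop sequences at all.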
package/bedrock-wrapper.js
CHANGED
@@ -406,7 +406,23 @@ export async function* bedrockWrapper(awsCreds, openaiChatCompletionsCreateObjec
             }
         }
 
+        // Handle case where stop sequences cause empty content array
+        if (!text_result && decodedBodyResponse.stop_reason === "stop_sequence") {
+            // If stopped by sequence but no content, return empty string instead of undefined
+            text_result = "";
+        }
+
+        // Ensure text_result is a string to prevent 'undefined' from being part of the response
+        if (text_result === null || text_result === undefined) {
+            text_result = "";
+        }
+
         let result = thinking_result ? `<think>${thinking_result}</think>\n\n${text_result}` : text_result;
+
+        // Ensure final result is a string, in case thinking_result was also empty
+        if (result === null || result === undefined) {
+            result = "";
+        }
         yield result;
     }
 }
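Extracted into a standalone function, the guard logic added in this hunk behaves like this (a sketch; `buildResult` and its parameters are stand-ins for the wrapper's local variables, not an exported API):

```javascript
// Sketch of the guards above: coerce a missing text result to "" so the
// literal string "undefined" can never leak into the yielded response.
function buildResult(text_result, thinking_result, decodedBodyResponse) {
    // Stop sequence fired before any content was produced
    if (!text_result && decodedBodyResponse.stop_reason === "stop_sequence") {
        text_result = "";
    }
    if (text_result === null || text_result === undefined) {
        text_result = "";
    }
    let result = thinking_result
        ? `<think>${thinking_result}</think>\n\n${text_result}`
        : text_result;
    if (result === null || result === undefined) {
        result = "";
    }
    return result;
}
```

The key case is a stop sequence that triggers before the model emits any content: the response body then has `stop_reason: "stop_sequence"` but no text, and without the guard the caller would receive `undefined`.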
@@ -442,7 +458,10 @@ function findAwsModelWithId(model) {
 export async function listBedrockWrapperSupportedModels() {
     let supported_models = [];
     for (let i = 0; i < bedrock_models.length; i++) {
-        supported_models.push(
+        supported_models.push(JSON.stringify({
+            modelName: bedrock_models[i].modelName,
+            modelId: bedrock_models[i].modelId
+        }));
     }
     return supported_models;
 }
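The corrected loop serializes only the two identifying fields per model. A runnable sketch with a stand-in model list (the entry shown is illustrative, not taken from bedrock-models.js):

```javascript
// Sketch: return one JSON string per model containing just modelName and
// modelId, regardless of what other fields the entry carries.
const models = [
    { modelName: "Example-Model", modelId: "vendor.example-model-v1:0", max_supported_response_tokens: 2048 }
];

function listSupportedModels(list) {
    const supported = [];
    for (let i = 0; i < list.length; i++) {
        supported.push(JSON.stringify({
            modelName: list[i].modelName,
            modelId: list[i].modelId
        }));
    }
    return supported;
}
```

Picking out only `modelName` and `modelId` keeps the listing stable for consumers even as internal per-model configuration fields change between releases.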
package/logs/e0b34b2c-ee9a-4813-893a-82d47d3d5141/notification.json
ADDED
@@ -0,0 +1,51 @@
+[
+    {
+        "session_id": "e0b34b2c-ee9a-4813-893a-82d47d3d5141",
+        "transcript_path": "C:\\Users\\Justin.Parker\\.claude\\projects\\C--git-bedrock-wrapper\\e0b34b2c-ee9a-4813-893a-82d47d3d5141.jsonl",
+        "cwd": "C:\\git\\bedrock-wrapper",
+        "hook_event_name": "Notification",
+        "message": "Claude is waiting for your input"
+    },
+    {
+        "session_id": "e0b34b2c-ee9a-4813-893a-82d47d3d5141",
+        "transcript_path": "C:\\Users\\Justin.Parker\\.claude\\projects\\C--git-bedrock-wrapper\\e0b34b2c-ee9a-4813-893a-82d47d3d5141.jsonl",
+        "cwd": "C:\\git\\bedrock-wrapper",
+        "hook_event_name": "Notification",
+        "message": "Claude is waiting for your input"
+    },
+    {
+        "session_id": "e0b34b2c-ee9a-4813-893a-82d47d3d5141",
+        "transcript_path": "C:\\Users\\Justin.Parker\\.claude\\projects\\C--git-bedrock-wrapper\\e0b34b2c-ee9a-4813-893a-82d47d3d5141.jsonl",
+        "cwd": "C:\\git\\bedrock-wrapper",
+        "hook_event_name": "Notification",
+        "message": "Claude is waiting for your input"
+    },
+    {
+        "session_id": "e0b34b2c-ee9a-4813-893a-82d47d3d5141",
+        "transcript_path": "C:\\Users\\Justin.Parker\\.claude\\projects\\C--git-bedrock-wrapper\\e0b34b2c-ee9a-4813-893a-82d47d3d5141.jsonl",
+        "cwd": "C:\\git\\bedrock-wrapper",
+        "hook_event_name": "Notification",
+        "message": "Claude is waiting for your input"
+    },
+    {
+        "session_id": "e0b34b2c-ee9a-4813-893a-82d47d3d5141",
+        "transcript_path": "C:\\Users\\Justin.Parker\\.claude\\projects\\C--git-bedrock-wrapper\\e0b34b2c-ee9a-4813-893a-82d47d3d5141.jsonl",
+        "cwd": "C:\\git\\bedrock-wrapper",
+        "hook_event_name": "Notification",
+        "message": "Claude is waiting for your input"
+    },
+    {
+        "session_id": "e0b34b2c-ee9a-4813-893a-82d47d3d5141",
+        "transcript_path": "C:\\Users\\Justin.Parker\\.claude\\projects\\C--git-bedrock-wrapper\\e0b34b2c-ee9a-4813-893a-82d47d3d5141.jsonl",
+        "cwd": "C:\\git\\bedrock-wrapper",
+        "hook_event_name": "Notification",
+        "message": "Claude is waiting for your input"
+    },
+    {
+        "session_id": "e0b34b2c-ee9a-4813-893a-82d47d3d5141",
+        "transcript_path": "C:\\Users\\Justin.Parker\\.claude\\projects\\C--git-bedrock-wrapper\\e0b34b2c-ee9a-4813-893a-82d47d3d5141.jsonl",
+        "cwd": "C:\\git\\bedrock-wrapper",
+        "hook_event_name": "Notification",
+        "message": "Claude is waiting for your input"
+    }
+]