@juspay/neurolink 9.23.0 → 9.24.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,3 +1,9 @@
+ ## [9.24.0](https://github.com/juspay/neurolink/compare/v9.23.0...v9.24.0) (2026-03-14)
+
+ ### Features
+
+ - **(ppt):** Implement CLI support for PPT Gen ([83e6847](https://github.com/juspay/neurolink/commit/83e684781b04562970bcd48f617d368d1c4db2ee))
+
  ## [9.23.0](https://github.com/juspay/neurolink/compare/v9.22.3...v9.23.0) (2026-03-14)

  ### Features
package/README.md CHANGED
@@ -37,19 +37,15 @@ Extracted from production systems at Juspay and battle-tested at enterprise scal

  ## What's New (Q1 2026)

- | Feature | Version | Description | Guide |
- | ----------------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- |
- | **Memory** | v9.12.0 | Per-user condensed memory that persists across conversations. LLM-powered condensation with S3, Redis, or SQLite backends. | [Memory Guide](docs/features/memory.md) |
- | **Context Window Management** | v9.2.0 | 4-stage compaction pipeline with auto-detection, budget gate at 80% usage, per-provider token estimation | [Context Compaction Guide](docs/features/context-compaction.md) |
- | **Tool Execution Control** | v9.3.0 | `prepareStep` and `toolChoice` support for per-step tool enforcement in multi-step agentic loops. API-level control over tool calls. | [API Reference](docs/api/type-aliases/GenerateOptions.md#preparestep) |
- | **File Processor System** | v9.1.0 | 17+ file type processors with ProcessorRegistry, security sanitization, SVG text injection | [File Processors Guide](docs/features/file-processors.md) |
- | **RAG with generate()/stream()** | v9.2.0 | Pass `rag: { files }` to generate/stream for automatic document chunking, embedding, and AI-powered search. 10 chunking strategies, hybrid search, reranking. | [RAG Guide](docs/features/rag.md) |
- | **External TracerProvider Support** | v8.43.0 | Integrate NeuroLink with existing OpenTelemetry instrumentation. Prevents duplicate registration conflicts. | [Observability Guide](docs/features/observability.md) |
- | **Server Adapters** | v8.43.0 | Multi-framework HTTP server with Hono, Express, Fastify, Koa support. Full CLI for server management with foreground/background modes. | [Server Adapters Guide](docs/guides/server-adapters/index.md) |
- | **Title Generation Events** | v8.38.0 | Emit `conversation:titleGenerated` event when conversation title is generated. Supports custom title prompts via `NEUROLINK_TITLE_PROMPT`. | [Conversation Memory Guide](docs/conversation-memory.md) |
- | **Video Generation with Veo** | v8.32.0 | Video generation using Veo 3.1 (`veo-3.1`). Realistic video generation with many parameter options | [Video Generation Guide](docs/features/video-generation.md) |
- | **Image Generation with Gemini** | v8.31.0 | Native image generation using Gemini 2.0 Flash Experimental (`imagen-3.0-generate-002`). High-quality image synthesis directly from Google AI. | [Image Generation Guide](docs/image-generation-streaming.md) |
- | **HTTP/Streamable HTTP Transport** | v8.29.0 | Connect to remote MCP servers via HTTP with authentication headers, automatic retry with exponential backoff, and configurable rate limiting. | [HTTP Transport Guide](docs/mcp-http-transport.md) |
+ | Feature | Version | Description | Guide |
+ | ----------------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------- |
+ | **External TracerProvider Support** | v8.43.0 | Integrate NeuroLink with existing OpenTelemetry instrumentation. Prevents duplicate registration conflicts. | [Observability Guide](docs/features/observability.md) |
+ | **Server Adapters** | v8.43.0 | Multi-framework HTTP server with Hono, Express, Fastify, Koa support. Full CLI for server management with foreground/background modes. | [Server Adapters Guide](docs/guides/server-adapters/index.md) |
+ | **Title Generation Events** | v8.38.0 | Emit `conversation:titleGenerated` event when conversation title is generated. Supports custom title prompts via `NEUROLINK_TITLE_PROMPT`. | [Conversation Memory Guide](docs/conversation-memory.md) |
+ | **Video Generation with Veo** | v8.32.0 | Video generation using Veo 3.1 (`veo-3.1`). Realistic video generation with many parameter options | [Video Generation Guide](docs/features/video-generation.md) |
+ | **Image Generation with Gemini** | v8.31.0 | Native image generation using Gemini 2.0 Flash Experimental (`imagen-3.0-generate-002`). High-quality image synthesis directly from Google AI. | [Image Generation Guide](docs/image-generation-streaming.md) |
+ | **RAG with generate()/stream()** | v9.2.0 | Pass `rag: { files }` to generate/stream for automatic document chunking, embedding, and AI-powered search. 10 chunking strategies, hybrid search, reranking. | [RAG Guide](docs/features/rag.md) |
+ | **HTTP/Streamable HTTP Transport** | v8.29.0 | Connect to remote MCP servers via HTTP with authentication headers, automatic retry with exponential backoff, and configurable rate limiting. | [HTTP Transport Guide](docs/mcp-http-transport.md) |

  - **Memory** – Per-user condensed memory that persists across all conversations. Automatically retrieves and stores memory on each `generate()`/`stream()` call. Supports S3, Redis, and SQLite storage with LLM-powered condensation. → [Memory Guide](docs/features/memory.md)
  - **External TracerProvider Support** – Integrate NeuroLink with applications that already have OpenTelemetry instrumentation. Supports auto-detection and manual configuration. → [Observability Guide](docs/features/observability.md)
@@ -57,6 +53,7 @@ Extracted from production systems at Juspay and battle-tested at enterprise scal
  - **Title Generation Events** – Emit real-time events when conversation titles are auto-generated. Listen to `conversation:titleGenerated` for session tracking. → [Conversation Memory Guide](docs/conversation-memory.md#title-generation-events)
  - **Custom Title Prompts** – Customize conversation title generation with `NEUROLINK_TITLE_PROMPT` environment variable. Use `${userMessage}` placeholder for dynamic prompts. → [Conversation Memory Guide](docs/conversation-memory.md#customizing-the-title-prompt)
  - **Video Generation** – Transform images into 8-second videos with synchronized audio using Google Veo 3.1 via Vertex AI. Supports 720p/1080p resolutions, portrait/landscape aspect ratios. → [Video Generation Guide](docs/features/video-generation.md)
+ - **PPT Generation** – Create professional PowerPoint presentations from text prompts with 35 slide types (title, content, charts, timelines, dashboards, composite layouts), 5 themes, and optional AI-generated images. Works with Vertex AI, OpenAI, Anthropic, Google AI, Azure, and Bedrock. → [PPT Generation Guide](docs/features/ppt-generation.md)
  - **Image Generation** – Generate images from text prompts using Gemini models via Vertex AI or Google AI Studio. Supports streaming mode with automatic file saving. → [Image Generation Guide](docs/image-generation-streaming.md)
  - **RAG with generate()/stream()** – Just pass `rag: { files: ["./docs/guide.md"] }` to `generate()` or `stream()`. NeuroLink auto-chunks, embeds, and creates a search tool the AI can invoke. 10 chunking strategies, hybrid search, 5 reranker types. → [RAG Guide](docs/features/rag.md)
  - **HTTP/Streamable HTTP Transport for MCP** – Connect to remote MCP servers via HTTP with authentication headers, retry logic, and rate limiting. → [HTTP Transport Guide](docs/mcp-http-transport.md)
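The new PPT Generation entry above surfaces through the `output` configuration the CLI assembles later in this diff (`buildGenerateOutputConfig`). As an illustration only, a standalone sketch of that configuration shape, with field names and defaults (10 pages, 16:9, images on) taken from the diff:

```javascript
// Sketch: the PPT branch of the output configuration built by the CLI.
// Field names and defaults mirror buildGenerateOutputConfig in this diff;
// this standalone function is illustrative, not part of the package API.
function buildPptOutputConfig(opts) {
  return {
    mode: "ppt",
    ppt: {
      pages: opts.pptPages || 10, // default slide count
      theme: opts.pptTheme, // undefined lets the AI pick
      audience: opts.pptAudience,
      tone: opts.pptTone,
      aspectRatio: opts.pptAspectRatio || "16:9",
      generateAIImages: !opts.pptNoImages,
      outputPath: opts.pptOutput,
    },
  };
}
```

With no options set, the sketch yields the documented defaults: 10 slides, 16:9, AI images enabled.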
@@ -28,11 +28,21 @@ export declare class CLICommandFactory {
  * Auto-configures provider, model, and tools settings for video generation
  */
  private static configureVideoMode;
+ /**
+ * Helper method to configure options for PPT generation mode
+ * Auto-configures provider, model, and tools settings for presentation generation
+ */
+ private static configurePPTMode;
  /**
  * Helper method to handle video file output
  * Saves generated video to file when --videoOutput flag is provided
  */
  private static handleVideoOutput;
+ /**
+ * Helper method to handle PPT file output
+ * Displays PPT generation result info
+ */
+ private static handlePPTOutput;
  private static isValidTokenUsage;
  private static normalizeTokenUsage;
  private static formatAnalyticsForTextMode;
@@ -111,6 +121,30 @@ export declare class CLICommandFactory {
  * Execute provider status command
  */
  private static executeProviderStatus;
+ /**
+ * Handle stdin input for generate command
+ */
+ private static handleGenerateStdinInput;
+ /**
+ * Detect output mode (video, ppt, or text) based on CLI arguments
+ */
+ private static detectGenerateOutputMode;
+ /**
+ * Process context for generation command
+ */
+ private static processGenerateContext;
+ /**
+ * Build multimodal input from CLI arguments
+ */
+ private static buildGenerateMultimodalInput;
+ /**
+ * Build output configuration for generate request
+ */
+ private static buildGenerateOutputConfig;
+ /**
+ * Handle successful generation result
+ */
+ private static handleGenerateSuccess;
  /**
  * Execute the generate command
  */
@@ -290,9 +290,9 @@ export class CLICommandFactory {
  // Video Generation options (Veo 3.1)
  outputMode: {
  type: "string",
- choices: ["text", "video"],
+ choices: ["text", "video", "ppt"],
  default: "text",
- description: "Output mode: 'text' for standard generation, 'video' for video generation",
+ description: "Output mode: 'text' for standard generation, 'video' for video, 'ppt' for presentation",
  },
  videoOutput: {
  type: "string",
@@ -322,6 +322,42 @@ export class CLICommandFactory {
  default: true,
  description: "Enable/disable audio generation in video",
  },
+ // PPT Generation options
+ pptPages: {
+ type: "number",
+ alias: "pages",
+ description: "Number of slides to generate (5-50, default: 10 when PPT mode is enabled)",
+ },
+ pptTheme: {
+ type: "string",
+ choices: ["modern", "corporate", "creative", "minimal", "dark"],
+ description: "Presentation theme/style (default: AI selects based on topic)",
+ },
+ pptAudience: {
+ type: "string",
+ choices: ["business", "students", "technical", "general"],
+ description: "Target audience (default: AI selects based on topic)",
+ },
+ pptTone: {
+ type: "string",
+ choices: ["professional", "casual", "educational", "persuasive"],
+ description: "Presentation tone (default: AI selects based on topic)",
+ },
+ pptOutput: {
+ type: "string",
+ alias: "po",
+ description: "Path to save generated PPTX file (e.g., ./output.pptx)",
+ },
+ pptAspectRatio: {
+ type: "string",
+ choices: ["16:9", "4:3"],
+ description: "Slide aspect ratio (default: 16:9 when PPT mode is enabled)",
+ },
+ pptNoImages: {
+ type: "boolean",
+ default: false,
+ description: "Disable AI image generation for slides",
+ },
  thinking: {
  alias: "think",
  type: "boolean",
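The `pptPages` help text above documents a 5-50 range with a default of 10, though this diff does not show where that range is enforced. Purely as an illustration of the documented contract (the function and its error message are hypothetical, not from the package), a minimal validator might look like:

```javascript
// Hypothetical helper: resolve a slide count from a raw --pptPages value,
// enforcing the 5-50 range and the default of 10 that the CLI help
// text above documents. Not part of the neurolink package.
function resolvePptPages(raw) {
  if (raw === undefined) {
    return 10; // default when PPT mode is enabled
  }
  if (!Number.isInteger(raw) || raw < 5 || raw > 50) {
    throw new Error("--pptPages must be an integer between 5 and 50");
  }
  return raw;
}
```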
@@ -528,6 +564,14 @@ export class CLICommandFactory {
  videoLength: argv.videoLength,
  videoAspectRatio: argv.videoAspectRatio,
  videoAudio: argv.videoAudio,
+ // PPT generation options
+ pptPages: argv.pptPages,
+ pptTheme: argv.pptTheme,
+ pptAudience: argv.pptAudience,
+ pptTone: argv.pptTone,
+ pptOutput: argv.pptOutput,
+ pptAspectRatio: argv.pptAspectRatio,
+ pptNoImages: argv.pptNoImages,
  // Extended thinking options for Claude and Gemini models
  thinking: argv.thinking,
  thinkingBudget: argv.thinkingBudget,
@@ -749,6 +793,50 @@ export class CLICommandFactory {
  });
  }
  }
+ /**
+ * Helper method to configure options for PPT generation mode
+ * Auto-configures provider, model, and tools settings for presentation generation
+ */
+ static configurePPTMode(enhancedOptions, argv, options) {
+ const userEnabledTools = !argv.disableTools; // Tools are enabled by default
+ enhancedOptions.disableTools = true;
+ // Auto-set provider for PPT generation if not explicitly specified
+ // PPT works best with Vertex or Google AI for content planning
+ if (!enhancedOptions.provider) {
+ enhancedOptions.provider = "vertex";
+ if (options.debug) {
+ logger.debug("Auto-setting provider to 'vertex' for PPT generation mode");
+ }
+ }
+ // Auto-set model if not explicitly specified
+ if (!enhancedOptions.model) {
+ // Use gemini-2.5-flash for fast, high-quality content planning
+ const modelAlias = "gemini-2.5-flash";
+ const resolvedModel = ModelResolver.resolveModel(modelAlias);
+ const fullModelId = resolvedModel?.id || "gemini-2.5-flash-001";
+ enhancedOptions.model = fullModelId;
+ if (options.debug) {
+ logger.debug(`Auto-setting model to '${fullModelId}' for PPT generation mode`);
+ }
+ }
+ // Warn user if they explicitly enabled tools
+ if (userEnabledTools && !options.quiet) {
+ logger.always(chalk.yellow("⚠️ Note: MCP tools are not supported in PPT generation mode and have been disabled."));
+ }
+ if (options.debug) {
+ logger.debug("PPT generation mode enabled (tools auto-disabled):", {
+ provider: enhancedOptions.provider,
+ model: enhancedOptions.model,
+ pages: enhancedOptions.pptPages,
+ theme: enhancedOptions.pptTheme,
+ audience: enhancedOptions.pptAudience,
+ tone: enhancedOptions.pptTone,
+ aspectRatio: enhancedOptions.pptAspectRatio,
+ noImages: enhancedOptions.pptNoImages,
+ outputPath: enhancedOptions.pptOutput,
+ });
+ }
+ }
  /**
  * Helper method to handle video file output
  * Saves generated video to file when --videoOutput flag is provided
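The defaulting behavior of `configurePPTMode` above (force-disable tools, fall back to the `vertex` provider, resolve `gemini-2.5-flash` with a hard-coded fallback ID) can be sketched as a pure function. This is an illustrative reduction of the diff, with `resolveModel` standing in for the package's `ModelResolver`:

```javascript
// Sketch of the provider/model defaulting in configurePPTMode: tools are
// always disabled, and missing provider/model fall back to Vertex with a
// Gemini flash model. resolveModel is a stand-in for ModelResolver.resolveModel.
function applyPptDefaults(opts, resolveModel) {
  const out = { ...opts, disableTools: true };
  if (!out.provider) {
    out.provider = "vertex"; // PPT planning default
  }
  if (!out.model) {
    const resolved = resolveModel("gemini-2.5-flash");
    // Fall back to the pinned model ID when resolution fails.
    out.model = (resolved && resolved.id) || "gemini-2.5-flash-001";
  }
  return out;
}
```

Note the one-way door in the real code: an explicitly chosen provider or model is respected, but tools are disabled unconditionally in PPT mode.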
@@ -775,14 +863,11 @@ export class CLICommandFactory {
  // Save video to file
  const saveResult = await saveVideoToFile(video, videoOutputPath);
  if (saveResult.success) {
- if (!options.quiet) {
- // Format video info output
- const sizeInfo = formatVideoFileSize(saveResult.size);
- const metadataSummary = getVideoMetadataSummary(video);
- logger.always(chalk.green(`🎬 Video saved to: ${saveResult.path} (${sizeInfo})`));
- if (metadataSummary) {
- logger.always(chalk.gray(` ${metadataSummary}`));
- }
+ const sizeInfo = formatVideoFileSize(saveResult.size);
+ const metadataSummary = getVideoMetadataSummary(video);
+ logger.always(chalk.green(`🎬 Video saved to: ${saveResult.path} (${sizeInfo})`));
+ if (!options.quiet && metadataSummary) {
+ logger.always(chalk.gray(` ${metadataSummary}`));
  }
  }
  else {
@@ -793,6 +878,52 @@ export class CLICommandFactory {
  handleError(error, "Video Output");
  }
  }
+ /**
+ * Helper method to handle PPT file output
+ * Displays PPT generation result info
+ */
+ static async handlePPTOutput(result, options) {
+ // Extract PPT from result with proper type checking
+ if (!result || typeof result !== "object") {
+ return;
+ }
+ const generateResult = result;
+ const ppt = generateResult.ppt;
+ if (!ppt) {
+ // PPT not in result - either not PPT mode or generation failed
+ return;
+ }
+ try {
+ if (options.quiet) {
+ if (ppt.filePath) {
+ logger.always(chalk.green(`📊 Presentation saved to: ${ppt.filePath}`));
+ }
+ else {
+ logger.always(chalk.green("📊 Presentation generated successfully."));
+ }
+ if (ppt.totalSlides) {
+ logger.always(chalk.white(`📄 Slides: ${ppt.totalSlides}`));
+ }
+ return;
+ }
+ logger.always(chalk.green("\n📊 Presentation Generated Successfully!"));
+ logger.always(chalk.gray("─".repeat(50)));
+ if (ppt.filePath) {
+ logger.always(chalk.white(` 📁 File: ${ppt.filePath}`));
+ }
+ if (ppt.totalSlides) {
+ logger.always(chalk.white(` 📄 Slides: ${ppt.totalSlides}`));
+ }
+ if (ppt.format) {
+ logger.always(chalk.white(` 📋 Format: ${ppt.format.toUpperCase()}`));
+ }
+ logger.always(chalk.gray("─".repeat(50)));
+ logger.always(chalk.cyan("💡 Tip: Open the file with PowerPoint or Google Slides to view."));
+ }
+ catch (error) {
+ handleError(error, "PPT Output");
+ }
+ }
  // Helper method to validate token usage data with fallback handling
  static isValidTokenUsage(tokens) {
  if (!tokens || typeof tokens !== "object" || tokens === null) {
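The `handlePPTOutput` handler added above implies a result shape: an optional `ppt` object carrying `filePath`, `totalSlides`, and `format`, each optional. A condensed sketch of just the extraction guards (field names from the diff; the standalone function itself is illustrative):

```javascript
// Sketch: extract PPT info from a generate() result, mirroring the
// guards in handlePPTOutput. Returns null when no PPT data is present
// (not PPT mode, or generation failed), matching the handler's early returns.
function extractPptInfo(result) {
  if (!result || typeof result !== "object") {
    return null;
  }
  const ppt = result.ppt;
  if (!ppt) {
    return null;
  }
  return {
    filePath: ppt.filePath ?? null,
    totalSlides: ppt.totalSlides ?? null,
    // The handler prints the format upper-cased (e.g. "PPTX").
    format: ppt.format ? ppt.format.toUpperCase() : null,
  };
}
```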
@@ -916,8 +1047,9 @@ export class CLICommandFactory {
  .example('$0 generate "Describe this video" --video path/to/video.mp4', "Analyze video content")
  .example('$0 generate "Product showcase video" --image ./product.jpg --outputMode video --videoOutput ./output.mp4', "Generate video from image")
  .example('$0 generate "Smooth camera movement" --image ./input.jpg --provider vertex --model veo-3.1-generate-001 --outputMode video --videoResolution 720p --videoLength 6 --videoAspectRatio 16:9 --videoOutput ./output.mp4', "Video generation with full options")
- .example('$0 generate "Explain AI" --provider anthropic --subscription-tier pro', "Use Anthropic with Pro subscription tier")
- .example('$0 generate "Deep analysis" --provider anthropic-subscription --subscription-tier max --auth-method oauth', "Use Anthropic with Max subscription and OAuth"));
+ .example('$0 generate "AI in Healthcare" --pptPages 10', "Generate a PowerPoint presentation")
+ .example('$0 generate "Company Q4 Results" --pptPages 15 --pptTheme corporate --pptAudience business', "Generate presentation with options")
+ .example('$0 generate "Machine Learning 101" --pptTheme minimal --pptTone educational --pptNoImages', "Generate educational slides without AI images"));
  },
  handler: async (argv) => await CLICommandFactory.executeGenerate(argv),
  };
@@ -1449,72 +1581,196 @@ export class CLICommandFactory {
  }
  }
  /**
- * Execute the generate command
+ * Handle stdin input for generate command
  */
- static async executeGenerate(argv) {
- // Handle stdin input if no input provided
+ static async handleGenerateStdinInput(argv) {
  if (!argv.input && !process.stdin.isTTY) {
  let stdinData = "";
  process.stdin.setEncoding("utf8");
  for await (const chunk of process.stdin) {
  stdinData += chunk;
  }
- argv.input = stdinData.trim();
- if (!argv.input) {
+ const trimmedData = stdinData.trim();
+ if (!trimmedData) {
  throw new Error("No input received from stdin");
  }
+ return trimmedData;
  }
  else if (!argv.input) {
  throw new Error('Input required. Use: neurolink generate "your prompt" or echo "prompt" | neurolink generate');
  }
- const options = CLICommandFactory.processOptions(argv);
- // Validate Anthropic subscription options if using Anthropic provider
- this.validateAnthropicSubscriptionOptions(options);
- // Determine if video generation mode is enabled
- const isVideoMode = options.outputMode === "video";
+ return argv.input;
+ }
+ /**
+ * Detect output mode (video, ppt, or text) based on CLI arguments
+ */
+ static detectGenerateOutputMode(argv, options) {
+ const outputMode = options.outputMode;
+ const isVideoMode = outputMode === "video";
+ const hasPPTFlags = argv.pptPages !== undefined ||
+ argv.pptTheme !== undefined ||
+ argv.pptAudience !== undefined ||
+ argv.pptTone !== undefined ||
+ argv.pptOutput !== undefined ||
+ argv.pptAspectRatio !== undefined ||
+ argv.pptNoImages === true;
+ const hasVideoSignals = outputMode === "video" || argv.videoOutput !== undefined;
+ const hasPPTSignals = outputMode === "ppt" || hasPPTFlags;
+ if (hasVideoSignals && hasPPTSignals) {
+ throw new Error("Conflicting output mode signals detected. Use either video mode (--outputMode video, optionally with --videoOutput) or PPT mode (--outputMode ppt / --ppt* flags), not both.");
+ }
+ const isPPTMode = outputMode === "ppt" || hasPPTFlags;
  const spinnerMessage = isVideoMode
  ? "🎬 Generating video... (this may take 1-2 minutes)"
- : "🤖 Generating text...";
+ : isPPTMode
+ ? "📊 Generating presentation... (this may take 2-5 minutes)"
+ : "🤖 Generating text...";
+ return { isVideoMode, isPPTMode, spinnerMessage };
+ }
+ /**
+ * Process context for generation command
+ */
+ static processGenerateContext(inputText, options) {
+ let processedInputText = inputText;
+ let contextMetadata;
+ if (options.context && options.contextConfig) {
+ const processedContextResult = ContextFactory.processContext(options.context, options.contextConfig);
+ if (processedContextResult.processedContext) {
+ processedInputText =
+ processedContextResult.processedContext + processedInputText;
+ }
+ contextMetadata = {
+ ...ContextFactory.extractAnalyticsContext(options.context),
+ contextMode: processedContextResult.config.mode,
+ contextTruncated: processedContextResult.metadata.truncated,
+ };
+ if (options.debug) {
+ logger.debug("Context processed:", {
+ mode: processedContextResult.config.mode,
+ truncated: processedContextResult.metadata.truncated,
+ processingTime: processedContextResult.metadata.processingTime,
+ });
+ }
+ }
+ return { inputText: processedInputText, contextMetadata };
+ }
+ /**
+ * Build multimodal input from CLI arguments
+ */
+ static buildGenerateMultimodalInput(inputText, argv) {
+ const imageBuffers = CLICommandFactory.processCliImages(argv.image);
+ const csvFiles = CLICommandFactory.processCliCSVFiles(argv.csv);
+ const pdfFiles = CLICommandFactory.processCliPDFFiles(argv.pdf);
+ const videoFiles = CLICommandFactory.processCliVideoFiles(argv.video);
+ const files = CLICommandFactory.processCliFiles(argv.file);
+ return {
+ text: inputText,
+ ...(imageBuffers && { images: imageBuffers }),
+ ...(csvFiles && { csvFiles }),
+ ...(pdfFiles && { pdfFiles }),
+ ...(videoFiles && { videoFiles }),
+ ...(files && { files }),
+ };
+ }
+ /**
+ * Build output configuration for generate request
+ */
+ static buildGenerateOutputConfig(isVideoMode, isPPTMode, enhancedOptions) {
+ if (isVideoMode) {
+ return {
+ mode: "video",
+ video: {
+ resolution: enhancedOptions.videoResolution,
+ length: enhancedOptions.videoLength,
+ aspectRatio: enhancedOptions.videoAspectRatio,
+ audio: enhancedOptions.videoAudio,
+ },
+ };
+ }
+ if (isPPTMode) {
+ return {
+ mode: "ppt",
+ ppt: {
+ pages: enhancedOptions.pptPages || 10,
+ theme: enhancedOptions.pptTheme,
+ audience: enhancedOptions.pptAudience,
+ tone: enhancedOptions.pptTone,
+ aspectRatio: enhancedOptions.pptAspectRatio || "16:9",
+ generateAIImages: !enhancedOptions.pptNoImages,
+ outputPath: enhancedOptions.pptOutput,
+ },
+ };
+ }
+ return undefined;
+ }
+ /**
+ * Handle successful generation result
+ */
+ static async handleGenerateSuccess(result, options, isVideoMode, isPPTMode, spinner) {
+ const genResult = result;
+ if (spinner) {
+ if (isVideoMode) {
+ spinner.succeed(chalk.green("✅ Video generated successfully!"));
+ }
+ else if (isPPTMode) {
+ spinner.succeed(chalk.green("✅ Presentation generated successfully!"));
+ }
+ else {
+ spinner.succeed(chalk.green("✅ Text generated successfully!"));
+ }
+ }
+ if (!options.quiet) {
+ const providerInfo = genResult.provider || "auto";
+ const modelInfo = genResult.model || "default";
+ logger.always(chalk.gray(`🔧 Provider: ${providerInfo} | Model: ${modelInfo}`));
+ }
+ if (!isVideoMode && !isPPTMode) {
+ this.handleOutput(genResult, options);
+ }
+ await this.handleTTSOutput(genResult, options);
+ await this.handleVideoOutput(genResult, options);
+ await this.handlePPTOutput(genResult, options);
+ if (options.debug) {
+ logger.debug("\n" + chalk.yellow("Debug Information:"));
+ logger.debug("Provider:", genResult.provider);
+ logger.debug("Model:", genResult.model);
+ if (genResult.analytics) {
+ logger.debug("Analytics:", JSON.stringify(genResult.analytics, null, 2));
+ }
+ if (genResult.evaluation) {
+ logger.debug("Evaluation:", JSON.stringify(genResult.evaluation, null, 2));
+ }
+ }
+ if (!globalSession.getCurrentSessionId()) {
+ await this.flushLangfuseTraces();
+ process.exit(0);
+ }
+ }
+ /**
+ * Execute the generate command
+ */
+ static async executeGenerate(argv) {
+ // Handle stdin input
+ const rawInput = await this.handleGenerateStdinInput(argv);
+ argv.input = rawInput;
+ const options = this.processOptions(argv);
+ // Detect output mode
+ const { isVideoMode, isPPTMode, spinnerMessage } = this.detectGenerateOutputMode(argv, options);
  const spinner = argv.quiet ? null : ora(spinnerMessage).start();
  try {
  // Add delay if specified
  if (options.delay) {
  await new Promise((resolve) => setTimeout(resolve, options.delay));
  }
- // Process context if provided
- let inputText = argv.input;
- let contextMetadata;
- if (options.context && options.contextConfig) {
- const processedContextResult = ContextFactory.processContext(options.context, options.contextConfig);
- // Integrate context into prompt if configured
- if (processedContextResult.processedContext) {
- inputText = processedContextResult.processedContext + inputText;
- }
- // Add context metadata for analytics
- contextMetadata = {
- ...ContextFactory.extractAnalyticsContext(options.context),
- contextMode: processedContextResult.config.mode,
- contextTruncated: processedContextResult.metadata.truncated,
- };
- if (options.debug) {
- logger.debug("Context processed:", {
- mode: processedContextResult.config.mode,
- truncated: processedContextResult.metadata.truncated,
- processingTime: processedContextResult.metadata.processingTime,
- });
- }
- }
+ // Process context
+ const { inputText, contextMetadata } = this.processGenerateContext(rawInput, options);
  // Handle dry-run mode for testing
  if (options.dryRun) {
  const mockResult = {
  content: "Mock response for testing purposes",
  provider: options.provider || "auto",
  model: options.model || "test-model",
- usage: {
- input: 10,
- output: 15,
- total: 25,
- },
+ usage: { input: 10, output: 15, total: 25 },
  responseTime: 150,
  analytics: options.enableAnalytics
  ? {
@@ -1541,7 +1797,7 @@ export class CLICommandFactory {
  if (spinner) {
  spinner.succeed(chalk.green("✅ Dry-run completed successfully!"));
  }
- CLICommandFactory.handleOutput(mockResult, options);
+ this.handleOutput(mockResult, options);
  if (options.debug) {
  logger.debug("\n" + chalk.yellow("Debug Information (Dry-run):"));
  logger.debug("Provider:", mockResult.provider);
@@ -1552,7 +1808,9 @@ export class CLICommandFactory {
  await CLICommandFactory.flushLangfuseTraces();
  process.exit(0);
  }
+ return;
  }
+ // Initialize SDK and session
  const sdk = globalSession.getOrCreateNeuroLink();
  const sessionVariables = globalSession.getSessionVariables();
  const enhancedOptions = { ...options, ...sessionVariables };
@@ -1566,24 +1824,17 @@ export class CLICommandFactory {
  toolsEnabled: !options.disableTools,
  });
  }
- // Video generation doesn't support tools, so auto-disable them
+ // Configure mode-specific options
  if (isVideoMode) {
  CLICommandFactory.configureVideoMode(enhancedOptions, argv, options);
  }
- // Process CLI multimodal inputs
- const imageBuffers = CLICommandFactory.processCliImages(argv.image);
- const csvFiles = CLICommandFactory.processCliCSVFiles(argv.csv);
- const pdfFiles = CLICommandFactory.processCliPDFFiles(argv.pdf);
- const videoFiles = CLICommandFactory.processCliVideoFiles(argv.video);
- const files = CLICommandFactory.processCliFiles(argv.file);
- const generateInput = {
- text: inputText,
- ...(imageBuffers && { images: imageBuffers }),
- ...(csvFiles && { csvFiles }),
- ...(pdfFiles && { pdfFiles }),
- ...(videoFiles && { videoFiles }),
- ...(files && { files }),
- };
+ if (isPPTMode) {
+ this.configurePPTMode(enhancedOptions, argv, options);
+ }
+ // Build multimodal input and output configuration
+ const generateInput = this.buildGenerateMultimodalInput(inputText, argv);
+ const outputConfig = this.buildGenerateOutputConfig(isVideoMode, isPPTMode, enhancedOptions);
+ // Execute generation
  const result = await sdk.generate({
  input: generateInput,
  csvOptions: {
@@ -1596,18 +1847,7 @@ export class CLICommandFactory {
  format: argv.videoFormat,
  transcribeAudio: argv.transcribeAudio,
  },
- // Video generation output configuration
- output: isVideoMode
- ? {
- mode: "video",
- video: {
- resolution: enhancedOptions.videoResolution,
- length: enhancedOptions.videoLength,
- aspectRatio: enhancedOptions.videoAspectRatio,
- audio: enhancedOptions.videoAudio,
- },
- }
- : undefined,
+ output: outputConfig,
  provider: enhancedOptions.provider,
  model: enhancedOptions.model,
  temperature: enhancedOptions.temperature,
@@ -1641,58 +1881,9 @@ export class CLICommandFactory {
  topK: argv.ragTopK,
  }
  : undefined,
- // TTS configuration
- tts: enhancedOptions.tts
- ? {
- enabled: true,
- useAiResponse: true,
- voice: enhancedOptions.ttsVoice,
- format: enhancedOptions.ttsFormat ||
- undefined,
- speed: enhancedOptions.ttsSpeed,
- quality: enhancedOptions.ttsQuality,
- output: enhancedOptions.ttsOutput,
- play: enhancedOptions.ttsPlay,
- }
- : undefined,
  });
- if (spinner) {
- if (isVideoMode) {
- spinner.succeed(chalk.green("✅ Video generated successfully!"));
- }
- else {
- spinner.succeed(chalk.green("✅ Text generated successfully!"));
- }
- }
- // Display provider and model info by default (unless quiet mode)
- if (!options.quiet) {
- const providerInfo = result.provider || "auto";
- const modelInfo = result.model || "default";
- logger.always(chalk.gray(`🔧 Provider: ${providerInfo} | Model: ${modelInfo}`));
- }
- // Handle output with universal formatting (for text mode)
- if (!isVideoMode) {
- CLICommandFactory.handleOutput(result, options);
- }
- // Handle TTS audio file output if --tts-output is provided
- await CLICommandFactory.handleTTSOutput(result, options);
- // Handle video file output if --videoOutput is provided
- await CLICommandFactory.handleVideoOutput(result, options);
- if (options.debug) {
- logger.debug("\n" + chalk.yellow("Debug Information:"));
- logger.debug("Provider:", result.provider);
- logger.debug("Model:", result.model);
- if (result.analytics) {
- logger.debug("Analytics:", JSON.stringify(result.analytics, null, 2));
- }
- if (result.evaluation) {
- logger.debug("Evaluation:", JSON.stringify(result.evaluation, null, 2));
- }
- }
- if (!globalSession.getCurrentSessionId()) {
- await CLICommandFactory.flushLangfuseTraces();
- process.exit(0);
- }
+ // Handle successful result
+ await this.handleGenerateSuccess(result, options, isVideoMode, isPPTMode, spinner);
  }
  catch (error) {
  if (spinner) {
@@ -22,5 +22,5 @@ export { SlideGenerator, createSlideGenerator, generateSlidesFromPlan, PptxGenJS
 export { generatePresentation } from "./presentationOrchestrator.js";
 export { validatePPTGenerationInput, validatePPTOutputOptions, validatePPTProvider, } from "../../utils/parameterValidation.js";
 export type { EnhancedValidationResult as PPTValidationResult } from "../../types/tools.js";
-export { PPT_VALID_PROVIDERS, getEffectivePPTProvider, generateOutputPath, ensureOutputDirectory, normalizeLogoConfig, getLayoutName, getFailureStage, toError, isObject, isLogoConfig, } from "./utils.js";
+export { PPT_VALID_PROVIDERS, getEffectivePPTProvider, generateOutputPath, ensureOutputDirectory, normalizeLogoConfig, getLayoutName, getFailureStage, toError, isObject, isLogoConfig, validateImageBuffer, } from "./utils.js";
 export type { EffectivePPTProviderResult } from "./types.js";
@@ -37,4 +37,4 @@ export { generatePresentation } from "./presentationOrchestrator.js";
 // Validation (re-export from parameterValidation for convenience)
 export { validatePPTGenerationInput, validatePPTOutputOptions, validatePPTProvider, } from "../../utils/parameterValidation.js";
 // PPT Provider Utilities and Helper Functions
-export { PPT_VALID_PROVIDERS, getEffectivePPTProvider, generateOutputPath, ensureOutputDirectory, normalizeLogoConfig, getLayoutName, getFailureStage, toError, isObject, isLogoConfig, } from "./utils.js";
+export { PPT_VALID_PROVIDERS, getEffectivePPTProvider, generateOutputPath, ensureOutputDirectory, normalizeLogoConfig, getLayoutName, getFailureStage, toError, isObject, isLogoConfig, validateImageBuffer, } from "./utils.js";
@@ -22,5 +22,5 @@ export { SlideGenerator, createSlideGenerator, generateSlidesFromPlan, PptxGenJS
 export { generatePresentation } from "./presentationOrchestrator.js";
 export { validatePPTGenerationInput, validatePPTOutputOptions, validatePPTProvider, } from "../../utils/parameterValidation.js";
 export type { EnhancedValidationResult as PPTValidationResult } from "../../types/tools.js";
-export { PPT_VALID_PROVIDERS, getEffectivePPTProvider, generateOutputPath, ensureOutputDirectory, normalizeLogoConfig, getLayoutName, getFailureStage, toError, isObject, isLogoConfig, } from "./utils.js";
+export { PPT_VALID_PROVIDERS, getEffectivePPTProvider, generateOutputPath, ensureOutputDirectory, normalizeLogoConfig, getLayoutName, getFailureStage, toError, isObject, isLogoConfig, validateImageBuffer, } from "./utils.js";
 export type { EffectivePPTProviderResult } from "./types.js";
@@ -37,5 +37,5 @@ export { generatePresentation } from "./presentationOrchestrator.js";
 // Validation (re-export from parameterValidation for convenience)
 export { validatePPTGenerationInput, validatePPTOutputOptions, validatePPTProvider, } from "../../utils/parameterValidation.js";
 // PPT Provider Utilities and Helper Functions
-export { PPT_VALID_PROVIDERS, getEffectivePPTProvider, generateOutputPath, ensureOutputDirectory, normalizeLogoConfig, getLayoutName, getFailureStage, toError, isObject, isLogoConfig, } from "./utils.js";
+export { PPT_VALID_PROVIDERS, getEffectivePPTProvider, generateOutputPath, ensureOutputDirectory, normalizeLogoConfig, getLayoutName, getFailureStage, toError, isObject, isLogoConfig, validateImageBuffer, } from "./utils.js";
 //# sourceMappingURL=index.js.map
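These export hunks add `validateImageBuffer` to the PPT utilities surface, but the diff does not show its implementation. Purely as an illustration of what image-buffer validation of this kind commonly involves, a magic-byte check might look like the following (hypothetical sketch; the real `validateImageBuffer` in `./utils.js` may work entirely differently):

```typescript
// Hypothetical magic-byte check, NOT the package's validateImageBuffer.
// Recognizes the leading signatures of PNG, JPEG, and GIF data.
function looksLikeImage(buf: Uint8Array): boolean {
  if (buf.length < 4) {
    return false; // too short to carry any known signature
  }
  const isPng =
    buf[0] === 0x89 && buf[1] === 0x50 && buf[2] === 0x4e && buf[3] === 0x47; // \x89PNG
  const isJpeg = buf[0] === 0xff && buf[1] === 0xd8 && buf[2] === 0xff; // JPEG SOI
  const isGif = buf[0] === 0x47 && buf[1] === 0x49 && buf[2] === 0x46; // "GIF"
  return isPng || isJpeg || isGif;
}
```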
@@ -68,8 +68,8 @@ export type GenerateCommandArgs = BaseCommandArgs & {
     thinkingLevel?: "minimal" | "low" | "medium" | "high";
     /** Vertex AI region */
     region?: string;
-    /** Output mode - 'text' for standard generation, 'video' for video generation */
-    outputMode?: "text" | "video";
+    /** Output mode - 'text' for standard generation, 'video' for video generation, 'ppt' for presentation */
+    outputMode?: "text" | "video" | "ppt";
     /** Path to save generated video file */
     videoOutput?: string;
     /** Video output resolution (720p or 1080p) */
@@ -80,6 +80,20 @@ export type GenerateCommandArgs = BaseCommandArgs & {
     videoAspectRatio?: "9:16" | "16:9";
     /** Enable/disable audio generation in video */
     videoAudio?: boolean;
+    /** Number of slides to generate (5-50) */
+    pptPages?: number;
+    /** Presentation theme/style */
+    pptTheme?: "modern" | "corporate" | "creative" | "minimal" | "dark";
+    /** Target audience */
+    pptAudience?: "business" | "students" | "technical" | "general";
+    /** Presentation tone/style */
+    pptTone?: "professional" | "casual" | "educational" | "persuasive";
+    /** Path to save generated PPTX file */
+    pptOutput?: string;
+    /** PPT aspect ratio */
+    pptAspectRatio?: "16:9" | "4:3";
+    /** Disable AI image generation for PPT slides */
+    pptNoImages?: boolean;
     /** Custom path for generated image output */
     imageOutput?: string;
 };
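The doc comment above constrains `pptPages` to 5-50 slides. A small sketch of how a CLI layer could clamp user input to that documented range before passing it on (the helper and its fallback behavior are illustrative; the diff does not show how the package enforces the range):

```typescript
// Clamp a requested slide count to the 5-50 range documented on pptPages.
// Assumption: out-of-range values are clamped rather than rejected.
const PPT_MIN_PAGES = 5;
const PPT_MAX_PAGES = 50;

function clampPptPages(requested?: number): number | undefined {
  if (requested === undefined || Number.isNaN(requested)) {
    return undefined; // let the SDK fall back to its default
  }
  return Math.min(PPT_MAX_PAGES, Math.max(PPT_MIN_PAGES, Math.trunc(requested)));
}
```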
@@ -371,6 +385,8 @@ export type GenerateResult = CommandResult & {
     audio?: import("./index.js").TTSResult;
     /** Video generation result when video mode is enabled */
     video?: import("./multimodal.js").VideoGenerationResult;
+    /** PPT generation result when ppt mode is enabled */
+    ppt?: import("./pptTypes.js").PPTGenerationResult;
     imageOutput?: {
         base64: string;
         savedPath?: string;
@@ -471,8 +471,8 @@ export type GenerateResult = {
  * });
  *
  * if (result.ppt) {
- *   console.log(`Generated ${result.ppt.slides.length} slides`);
- *   console.log(`Title: ${result.ppt.slides[0].title}`);
+ *   console.log(`Generated ${result.ppt.totalSlides} slides`);
+ *   console.log(`Saved at: ${result.ppt.filePath}`);
  * }
  * ```
  */
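The corrected JSDoc example reads `totalSlides` and `filePath` off `result.ppt`. A self-contained sketch of guarding and consuming a result of that shape (only those two field names come from the diff; the surrounding types are assumed stand-ins for the real `GenerateResult` and `PPTGenerationResult`):

```typescript
// Minimal shapes assumed from the JSDoc example; the real types in
// ./pptTypes.js almost certainly carry more fields than these.
type PPTResultLike = { totalSlides: number; filePath: string };
type GenerateResultLike = { content?: string; ppt?: PPTResultLike };

function describePpt(result: GenerateResultLike): string | undefined {
  if (!result.ppt) {
    return undefined; // text/video mode: no presentation was produced
  }
  return `Generated ${result.ppt.totalSlides} slides at ${result.ppt.filePath}`;
}
```

Since `ppt` is optional on `GenerateResult`, callers must narrow it (as the JSDoc's `if (result.ppt)` does) before touching its fields.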
@@ -68,8 +68,8 @@ export type GenerateCommandArgs = BaseCommandArgs & {
     thinkingLevel?: "minimal" | "low" | "medium" | "high";
     /** Vertex AI region */
     region?: string;
-    /** Output mode - 'text' for standard generation, 'video' for video generation */
-    outputMode?: "text" | "video";
+    /** Output mode - 'text' for standard generation, 'video' for video generation, 'ppt' for presentation */
+    outputMode?: "text" | "video" | "ppt";
     /** Path to save generated video file */
     videoOutput?: string;
     /** Video output resolution (720p or 1080p) */
@@ -80,6 +80,20 @@ export type GenerateCommandArgs = BaseCommandArgs & {
     videoAspectRatio?: "9:16" | "16:9";
     /** Enable/disable audio generation in video */
     videoAudio?: boolean;
+    /** Number of slides to generate (5-50) */
+    pptPages?: number;
+    /** Presentation theme/style */
+    pptTheme?: "modern" | "corporate" | "creative" | "minimal" | "dark";
+    /** Target audience */
+    pptAudience?: "business" | "students" | "technical" | "general";
+    /** Presentation tone/style */
+    pptTone?: "professional" | "casual" | "educational" | "persuasive";
+    /** Path to save generated PPTX file */
+    pptOutput?: string;
+    /** PPT aspect ratio */
+    pptAspectRatio?: "16:9" | "4:3";
+    /** Disable AI image generation for PPT slides */
+    pptNoImages?: boolean;
     /** Custom path for generated image output */
     imageOutput?: string;
 };
@@ -371,6 +385,8 @@ export type GenerateResult = CommandResult & {
     audio?: import("./index.js").TTSResult;
     /** Video generation result when video mode is enabled */
     video?: import("./multimodal.js").VideoGenerationResult;
+    /** PPT generation result when ppt mode is enabled */
+    ppt?: import("./pptTypes.js").PPTGenerationResult;
     imageOutput?: {
         base64: string;
         savedPath?: string;
@@ -471,8 +471,8 @@ export type GenerateResult = {
  * });
  *
  * if (result.ppt) {
- *   console.log(`Generated ${result.ppt.slides.length} slides`);
- *   console.log(`Title: ${result.ppt.slides[0].title}`);
+ *   console.log(`Generated ${result.ppt.totalSlides} slides`);
+ *   console.log(`Saved at: ${result.ppt.filePath}`);
  * }
  * ```
  */
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@juspay/neurolink",
-  "version": "9.23.0",
+  "version": "9.24.0",
   "description": "Universal AI Development Platform with working MCP integration, multi-provider support, and professional CLI. Built-in tools operational, 58+ external MCP servers discoverable. Connect to filesystem, GitHub, database operations, and more. Build, test, and deploy AI applications with 13 providers: OpenAI, Anthropic, Google AI, AWS Bedrock, Azure, Hugging Face, Ollama, and Mistral AI.",
   "author": {
     "name": "Juspay Technologies",