smart_prompt 0.3.2 → 0.3.3

This diff shows the content of publicly released package versions as they appear in their respective public registries. It is provided for informational purposes only.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: e8afddc47f83b453b4b7165520beac3a59583b7324eb6f7f41b6cc55fd4b5a4a
-   data.tar.gz: e8a430a5068ca9b1f40cf5bdf42dbb4aae762d991308c5049ee7f3ea781a09f8
+   metadata.gz: ced076006bec94a02a4612d9d86794c6a1468b50b386ce99e67f9905631db289
+   data.tar.gz: f815ef18d7a297552db4aeb679d4213d03662f532876f2780b6374b59d9fb40a
  SHA512:
-   metadata.gz: 3f0ca77b523017030b53fa562229d93b7f297ec118a1ad5f19b8af912e630a84a4251a0f73fc2800bdfe4c0bb6522154872aae105a57c4d405f83b807d662467
-   data.tar.gz: '0946bfb17ec34230af139fa39b9adfbfc6d3b37d714b312a9df123d18ab73a5fe711ed7818b77d6f2c8e28de1edd478d0061bea790872c799fca55d7cc54cdaf'
+   metadata.gz: 003dbe0204b9eb84a04de6ac3520903fdf92e8621fe79cc7c9fd4360e1f4bb845a425ebd59b82cfe82b8c726ada32ba5dfc7576c2b66a16bf3f95f1e5f6fd9a0
+   data.tar.gz: ef2887559b141422b9704a5f80076d10037b25c3d0d1d0d27fbb9d6b9a772419789c5d43bc9f68a15c429f19402ef99fe002cf11cfcf719082d37753f1aa8d6e
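
For reference, these digests cover the `metadata.gz` and `data.tar.gz` entries inside the published `.gem` archive, which is a plain tar file. A minimal verification sketch, assuming the gem has been fetched locally first (e.g. with `gem fetch smart_prompt -v 0.3.3`):

```ruby
# Recompute the SHA256 digests listed in checksums.yaml above.
# Assumes smart_prompt-0.3.3.gem is present in the current directory.
require "digest"
require "rubygems/package"

File.open("smart_prompt-0.3.3.gem", "rb") do |io|
  tar = Gem::Package::TarReader.new(io)
  tar.each do |entry|
    # The .gem tar contains metadata.gz, data.tar.gz and checksums.yaml.gz.
    next unless %w[metadata.gz data.tar.gz].include?(entry.full_name)
    puts "#{entry.full_name}: #{Digest::SHA256.hexdigest(entry.read)}"
  end
  tar.close
end
```

The printed values should match the `+` lines of the SHA256 block above.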
data/README.cn.md CHANGED
@@ -1,27 +1,394 @@
+ [English](./README.md) | 中文
+
  # SmartPrompt

- SmartPrompt 是一个强大的 Ruby gem,提供了一种领域特定语言(DSL),使其他 Ruby 程序能够更加方便、自然地调用各种大型语言模型(LLM)的能力。
+ [![Gem Version](https://badge.fury.io/rb/smart_prompt.svg)](https://badge.fury.io/rb/smart_prompt)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+
+ SmartPrompt 是一个强大的 Ruby gem,提供了优雅的领域特定语言(DSL),用于构建基于大型语言模型(LLM)的智能应用程序。它使 Ruby 程序能够无缝地与各种 LLM 服务提供商交互,同时保持清晰、可组合和高度可定制的代码架构。
+
+ ## 🚀 核心特性

- ## 主要特性
+ ### 多 LLM 支持
+ - **OpenAI API 兼容**: 完全支持 OpenAI GPT 模型和兼容的 API
+ - **Llama.cpp 集成**: 直接集成本地 Llama.cpp 服务器
+ - **可扩展适配器**: 易于扩展的适配器系统,支持新的 LLM 提供商
+ - **统一接口**: 无论底层 LLM 提供商如何,都使用相同的 API

- - 灵活的任务组合:以特定服务提供商 + 特定 LLM + 特定 prompt 的方式组合各种任务
- - 子任务嵌套:支持以 DSL 形式组合调用其他子任务
- - 性能优化:在保证质量的同时,提供性能最优或成本最低的解决方案
+ ### 灵活架构
+ - **基于 Worker 的任务**: 为特定 AI 任务定义可重用的 Worker
+ - **模板系统**: 基于 ERB 的提示词模板,支持参数注入
+ - **对话管理**: 内置对话历史和上下文管理
+ - **流式支持**: 实时响应流,提供更好的用户体验

- ## 安装
+ ### 高级功能
+ - **工具调用**: 原生支持函数调用和工具集成
+ - **重试逻辑**: 强大的错误处理机制,支持可配置的重试
+ - **嵌入向量**: 文本嵌入生成,用于语义搜索和 RAG 应用
+ - **配置驱动**: 基于 YAML 的配置,便于部署管理

- gem 安装并添加到应用程序的 Gemfile 中,执行以下命令:
+ ### 生产就绪
+ - **全面日志记录**: 详细的日志记录,用于调试和监控
+ - **错误处理**: 优雅的错误处理,包含自定义异常类型
+ - **性能优化**: 高效的资源使用和响应缓存
+ - **线程安全**: 支持多线程应用中的并发使用

+ ## 📦 安装
+
+ 添加到你的 Gemfile:
+
+ ```ruby
+ gem 'smart_prompt'
  ```
- $ bundle add smart_prompt
+
+ 然后执行:
+ ```bash
+ $ bundle install
+ ```
+
+ 或直接安装:
+ ```bash
+ $ gem install smart_prompt
+ ```
+
+ ## 🛠️ 快速开始
+
+ ### 1. 配置
+
+ 创建 YAML 配置文件(`config/smart_prompt.yml`):
+
+ ```yaml
+ # 适配器定义
+ adapters:
+   openai: OpenAIAdapter
+ # LLM 配置
+ llms:
+   SiliconFlow:
+     adapter: openai
+     url: https://api.siliconflow.cn/v1/
+     api_key: ENV["APIKey"]
+     default_model: Qwen/Qwen2.5-7B-Instruct
+   llamacpp:
+     adapter: openai
+     url: http://localhost:8080/
+   ollama:
+     adapter: openai
+     url: http://localhost:11434/
+     default_model: deepseek-r1
+   deepseek:
+     adapter: openai
+     url: https://api.deepseek.com
+     api_key: ENV["DSKEY"]
+     default_model: deepseek-reasoner
+
+ # 默认设置
+ default_llm: SiliconFlow
+ template_path: "./templates"
+ worker_path: "./workers"
+ logger_file: "./logs/smart_prompt.log"
  ```

- 如果不使用 bundler 来管理依赖,可以通过执行以下命令来安装 gem:
+ ### 2. 创建提示词模板
+
+ 在 `templates/` 目录中创建模板文件:
+
+ **templates/chat.erb**:
+ ```erb
+ 你是一个有用的助手。请回答以下问题:

+ 问题:<%= question %>
+
+ 背景:<%= context || "未提供额外背景信息" %>
  ```
- $ gem install smart_prompt
+
+ ### 3. 定义 Worker
+
+ 在 `workers/` 目录中创建 worker 文件:
+
+ **workers/chat_worker.rb**:
+ ```ruby
+ SmartPrompt.define_worker :chat_assistant do
+   # 使用特定的 LLM
+   use "SiliconFlow"
+   model "deepseek-ai/DeepSeek-V3"
+   # 设置系统消息
+   sys_msg("你是一个有用的 AI 助手。", params)
+   # 使用模板和参数
+   prompt(:chat, {
+     question: params[:question],
+     context: params[:context]
+   })
+   # 发送消息并返回响应
+   send_msg
+ end
+ ```
+
+ ### 4. 在应用中使用
+
+ ```ruby
+ require 'smart_prompt'
+
+ # 使用配置初始化引擎
+ engine = SmartPrompt::Engine.new('config/smart_prompt.yml')
+
+ # 执行 worker
+ result = engine.call_worker(:chat_assistant, {
+   question: "什么是机器学习?",
+   context: "我们正在讨论 AI 技术"
+ })
+
+ puts result
+ ```
+
+ ## 📚 高级用法
+
+ ### 流式响应
+
+ ```ruby
+ # 定义流式 worker
+ SmartPrompt.define_worker :streaming_chat do
+   use "deepseek"
+   model "deepseek-chat"
+   sys_msg("你是一个有用的助手。")
+   prompt(params[:message])
+   send_msg
+ end
+
+ # 使用流式处理
+ engine.call_worker_by_stream(:streaming_chat, {
+   message: "给我讲个故事"
+ }) do |chunk, bytesize|
+   print chunk.dig("choices", 0, "delta", "content")
+ end
+ ```
+
+ ### 工具集成
+
+ ```ruby
+ # 定义带工具的 worker
+ SmartPrompt.define_worker :assistant_with_tools do
+   use "SiliconFlow"
+   model "Qwen/Qwen3-235B-A22B"
+   tools = [
+     {
+       type: "function",
+       function: {
+         name: "get_weather",
+         description: "获取指定位置的天气信息",
+         parameters: {
+           type: "object",
+           properties: {
+             location: {
+               type: "string",
+               description: "城市和省份"
+             }
+           },
+           required: ["location"]
+         }
+       }
+     }
+   ]
+
+   sys_msg("你可以使用可用的工具帮助处理天气查询。", params)
+   prompt(params[:message])
+   params.merge(tools: tools)
+   send_msg
+ end
+ ```
+
+ ### 对话历史
+
+ ```ruby
+ SmartPrompt.define_worker :conversational_chat do
+   use "deepseek"
+   model "deepseek-chat"
+   sys_msg("你是一个记住对话上下文的有用助手。")
+   prompt(params[:message], with_history: true)
+   send_msg
+ end
+ ```
+
+ ### 嵌入向量生成
+
+ ```ruby
+ SmartPrompt.define_worker :text_embedder do
+   use "SiliconFlow"
+   model "BAAI/bge-m3"
+   prompt params[:text]
+   embeddings(params[:dimensions] || 1024)
+ end
+
+ # 使用方法
+ embeddings = engine.call_worker(:text_embedder, {
+   text: "将此文本转换为嵌入向量",
+   dimensions: 1024
+ })
+ ```
+
+ ## 🏗️ 架构概述
+
+ SmartPrompt 采用模块化架构:
+
+ ```
+ ┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
+ │    应用程序     │    │   SmartPrompt    │    │   LLM 提供商    │
+ │                 │◄──►│      引擎        │◄──►│    (OpenAI/     │
+ │                 │    │                  │    │   Llama.cpp)    │
+ └─────────────────┘    └──────────────────┘    └─────────────────┘
+
+                        ┌────────┼────────┐
+                        │        │        │
+                    ┌───▼───┐ ┌──▼──┐ ┌───▼────┐
+                    │Worker │ │对话 │ │ 模板   │
+                    │       │ │管理 │ │ 系统   │
+                    └───────┘ └─────┘ └────────┘
+ ```
+
+ ### 核心组件
+
+ - **引擎(Engine)**: 中央编排器,管理配置、适配器和 Worker
+ - **Worker**: 包含嵌入业务逻辑的可重用任务定义
+ - **对话(Conversation)**: 上下文和消息历史管理
+ - **适配器(Adapters)**: LLM 提供商集成(OpenAI、Llama.cpp 等)
+ - **模板(Templates)**: 基于 ERB 的提示词模板系统
+
+ ## 🔧 配置参考
+
+ ### 适配器配置
+
+ ```yaml
+ adapters:
+   openai: "OpenAIAdapter"  # 用于 OpenAI API
+ ```
+
+ ### LLM 配置
+
+ ```yaml
+ llms:
+   model_name:
+     adapter: "adapter_name"
+     api_key: "your_api_key"  # 可以使用 ENV['KEY_NAME']
+     url: "https://api.url"
+     model: "model_identifier"
+     temperature: 0.7
+     # 其他提供商特定选项
+ ```
+
+ ### 路径配置
+
+ ```yaml
+ template_path: "./templates"  # .erb 模板目录
+ worker_path: "./workers"      # worker 定义目录
+ logger_file: "./logs/app.log" # 日志文件位置
+ ```
+
+ ## 🧪 测试
+
+ 运行测试套件:
+
+ ```bash
+ bundle exec rake test
+ ```
+
+ 开发时,可以使用控制台:
+
+ ```bash
+ bundle exec bin/console
+ ```
+
+ ## 🤝 集成示例
+
+ ### 与 Rails 应用集成
+
+ ```ruby
+ # config/initializers/smart_prompt.rb
+ class SmartPromptService
+   def self.engine
+     @engine ||= SmartPrompt::Engine.new(
+       Rails.root.join('config', 'smart_prompt.yml')
+     )
+   end
+
+   def self.chat(message, context: nil)
+     engine.call_worker(:chat_assistant, {
+       question: message,
+       context: context
+     })
+   end
+ end
+
+ # 在控制器中使用
+ class ChatController < ApplicationController
+   def create
+     response = SmartPromptService.chat(
+       params[:message],
+       context: session[:conversation_context]
+     )
+
+     render json: { response: response }
+   end
+ end
  ```

- ## 用法
+ ### 与 Sidekiq 后台任务集成
+
+ ```ruby
+ class LLMProcessingJob < ApplicationJob
+   def perform(task_type, parameters)
+     engine = SmartPrompt::Engine.new('config/smart_prompt.yml')
+     result = engine.call_worker(task_type.to_sym, parameters)
+
+     # 处理结果...
+     NotificationService.send_completion(result)
+   end
+ end
+ ```
+
+ ## 🚀 实际应用场景
+
+ - **聊天机器人和对话式 AI**: 构建具有上下文感知能力的复杂聊天机器人
+ - **内容生成**: 基于模板驱动的提示词进行自动化内容创建
+ - **代码分析**: AI 驱动的代码审查和文档生成
+ - **客户支持**: 智能工单路由和响应建议
+ - **数据处理**: LLM 驱动的数据提取和转换
+ - **教育工具**: AI 导师和学习辅助系统
+
+ ## 🛣️ 发展路线图
+
+ - [ ] 新增 LLM 提供商适配器(Anthropic Claude、Google PaLM)
+ - [ ] 可视化提示词构建器和管理界面
+ - [ ] 增强缓存和性能优化
+ - [ ] 与向量数据库集成,支持 RAG 应用
+ - [ ] 内置提示词评估和测试框架
+ - [ ] 分布式 worker 执行支持
+
+ ## 🤝 贡献
+
+ 我们欢迎贡献!请查看 [CONTRIBUTING.md](CONTRIBUTING.md) 了解指南。
+
+ 1. Fork 本仓库
+ 2. 创建你的功能分支(`git checkout -b feature/amazing-feature`)
+ 3. 提交你的更改(`git commit -am 'Add amazing feature'`)
+ 4. 推送到分支(`git push origin feature/amazing-feature`)
+ 5. 开启一个 Pull Request
+
+ ## 📄 许可证
+
+ 本项目使用 MIT 许可证 - 详情请查看 [LICENSE.txt](LICENSE.txt) 文件。
+
+ ## 🙏 致谢
+
+ - 由 SmartPrompt 团队用 ❤️ 构建
+ - 受到在 Ruby 应用中优雅集成 LLM 需求的启发
+ - 感谢所有贡献者和 Ruby 社区
+
+ ## 📞 支持
+
+ - 📖 [文档](https://github.com/zhuangbiaowei/smart_prompt/wiki)
+ - 🐛 [问题追踪](https://github.com/zhuangbiaowei/smart_prompt/issues)
+ - 💬 [讨论区](https://github.com/zhuangbiaowei/smart_prompt/discussions)
+ - 📧 邮箱:zbw@kaiyuanshe.org
+
+ ---

- 更多示例请看项目 [SmartPromptDemo](https://github.com/zhuangbiaowei/SmartPromptDemo)
+ **SmartPrompt** - 让 Ruby 应用中的 LLM 集成变得简单、强大且优雅。
data/README.md CHANGED
@@ -2,28 +2,393 @@ EN | [中文](./README.cn.md)
 
  # SmartPrompt

- SmartPrompt is a powerful Ruby gem that provides a domain-specific language (DSL), enabling other Ruby programs to conveniently and naturally call upon the capabilities of various large language models (LLMs).
+ [![Gem Version](https://badge.fury.io/rb/smart_prompt.svg)](https://badge.fury.io/rb/smart_prompt)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

- ## Key Features
+ SmartPrompt is a powerful Ruby gem that provides an elegant domain-specific language (DSL) for building intelligent applications with Large Language Models (LLMs). It enables Ruby programs to seamlessly interact with various LLM providers while maintaining clean, composable, and highly customizable code architecture.

- - Flexible task composition: Combine various tasks using specific service providers + specific LLMs + specific prompts
- - Nested subtasks: Support for composing and calling other subtasks in DSL form
- - Performance optimization: Provide performance-optimal or cost-effective solutions while ensuring quality
+ ## 🚀 Key Features

- ## Installation
+ ### Multi-LLM Support
+ - **OpenAI API Compatible**: Full support for OpenAI GPT models and compatible APIs
+ - **Llama.cpp Integration**: Direct integration with local Llama.cpp servers
+ - **Extensible Adapters**: Easy-to-extend adapter system for new LLM providers
+ - **Unified Interface**: Same API regardless of the underlying LLM provider

- To install the gem and add it to your application's Gemfile, execute the following command:
+ ### Flexible Architecture
+ - **Worker-based Tasks**: Define reusable workers for specific AI tasks
+ - **Template System**: ERB-based prompt templates with parameter injection
+ - **Conversation Management**: Built-in conversation history and context management
+ - **Streaming Support**: Real-time response streaming for better user experience

+ ### Advanced Features
+ - **Tool Calling**: Native support for function calling and tool integration
+ - **Retry Logic**: Robust error handling with configurable retry mechanisms
+ - **Embeddings**: Text embedding generation for semantic search and RAG applications
+ - **Configuration-driven**: YAML-based configuration for easy deployment management
+
+ ### Production Ready
+ - **Comprehensive Logging**: Detailed logging for debugging and monitoring
+ - **Error Handling**: Graceful error handling with custom exception types
+ - **Performance Optimized**: Efficient resource usage and response caching
+ - **Thread Safe**: Safe for concurrent usage in multi-threaded applications
+
+ ## 📦 Installation
+
+ Add to your Gemfile:
+
+ ```ruby
+ gem 'smart_prompt'
+ ```
+
+ Then execute:
+ ```bash
+ $ bundle install
+ ```
+
+ Or install directly:
+ ```bash
+ $ gem install smart_prompt
+ ```
+
+ ## 🛠️ Quick Start
+
+ ### 1. Configuration
+
+ Create a YAML configuration file (`config/smart_prompt.yml`):
+
+ ```yaml
+ # Adapter definitions
+ adapters:
+   openai: OpenAIAdapter
+ # LLM configurations
+ llms:
+   SiliconFlow:
+     adapter: openai
+     url: https://api.siliconflow.cn/v1/
+     api_key: ENV["APIKey"]
+     default_model: Qwen/Qwen2.5-7B-Instruct
+   llamacpp:
+     adapter: openai
+     url: http://localhost:8080/
+   ollama:
+     adapter: openai
+     url: http://localhost:11434/
+     default_model: deepseek-r1
+   deepseek:
+     adapter: openai
+     url: https://api.deepseek.com
+     api_key: ENV["DSKEY"]
+     default_model: deepseek-reasoner
+
+ # Default settings
+ default_llm: SiliconFlow
+ template_path: "./templates"
+ worker_path: "./workers"
+ logger_file: "./logs/smart_prompt.log"
  ```
- $ bundle add smart_prompt
+
+ ### 2. Create Prompt Templates
+
+ Create template files in your `templates/` directory:
+
+ **templates/chat.erb**:
+ ```erb
+ You are a helpful assistant. Please respond to the following question:
+
+ Question: <%= question %>
+
+ Context: <%= context || "No additional context provided" %>
  ```

- If you don't use a bundler to manage dependencies, you can install the gem by executing the following command:
+ ### 3. Define Workers

+ Create worker files in your `workers/` directory:
+
+ **workers/chat_worker.rb**:
+ ```ruby
+ SmartPrompt.define_worker :chat_assistant do
+   # Use a specific LLM
+   use "SiliconFlow"
+   model "deepseek-ai/DeepSeek-V3"
+   # Set system message
+   sys_msg("You are a helpful AI assistant.", params)
+   # Use template with parameters
+   prompt(:chat, {
+     question: params[:question],
+     context: params[:context]
+   })
+   # Send message and return response
+   send_msg
+ end
  ```
- $ gem install smart_prompt
+
+ ### 4. Use in Your Application
+
+ ```ruby
+ require 'smart_prompt'
+
+ # Initialize engine with config
+ engine = SmartPrompt::Engine.new('config/smart_prompt.yml')
+
+ # Execute worker
+ result = engine.call_worker(:chat_assistant, {
+   question: "What is machine learning?",
+   context: "We're discussing AI technologies"
+ })
+
+ puts result
+ ```
+
+ ## 📚 Advanced Usage
+
+ ### Streaming Responses
+
+ ```ruby
+ # Define streaming worker
+ SmartPrompt.define_worker :streaming_chat do
+   use "deepseek"
+   model "deepseek-chat"
+   sys_msg("You are a helpful assistant.")
+   prompt(params[:message])
+   send_msg
+ end
+
+ # Use with streaming
+ engine.call_worker_by_stream(:streaming_chat, {
+   message: "Tell me a story"
+ }) do |chunk, bytesize|
+   print chunk.dig("choices", 0, "delta", "content")
+ end
+ ```
+
+ ### Tool Integration
+
+ ```ruby
+ # Define worker with tools
+ SmartPrompt.define_worker :assistant_with_tools do
+   use "SiliconFlow"
+   model "Qwen/Qwen3-235B-A22B"
+   tools = [
+     {
+       type: "function",
+       function: {
+         name: "get_weather",
+         description: "Get weather information for a location",
+         parameters: {
+           type: "object",
+           properties: {
+             location: {
+               type: "string",
+               description: "The city and state"
+             }
+           },
+           required: ["location"]
+         }
+       }
+     }
+   ]
+
+   sys_msg("You can help with weather queries using available tools.", params)
+   prompt(params[:message])
+   params.merge(tools: tools)
+   send_msg
+ end
+ ```
+
+ ### Conversation History
+
+ ```ruby
+ SmartPrompt.define_worker :conversational_chat do
+   use "deepseek"
+   model "deepseek-chat"
+   sys_msg("You are a helpful assistant that remembers conversation context.")
+   prompt(params[:message], with_history: true)
+   send_msg
+ end
  ```

- ## Usage
+ ### Embeddings Generation
+
+ ```ruby
+ SmartPrompt.define_worker :text_embedder do
+   use "SiliconFlow"
+   model "BAAI/bge-m3"
+   prompt params[:text]
+   embeddings(params[:dimensions] || 1024)
+ end
+
+ # Usage
+ embeddings = engine.call_worker(:text_embedder, {
+   text: "Convert this text to embeddings",
+   dimensions: 1024
+ })
+ ```
+
+ ## 🏗️ Architecture Overview
+
+ SmartPrompt follows a modular architecture:
+
+ ```
+ ┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
+ │   Application   │    │   SmartPrompt    │    │  LLM Provider   │
+ │                 │◄──►│      Engine      │◄──►│   (OpenAI/      │
+ │                 │    │                  │    │   Llama.cpp)    │
+ └─────────────────┘    └──────────────────┘    └─────────────────┘
+
+                        ┌────────┼────────┐
+                        │        │        │
+                    ┌───▼───┐ ┌──▼──┐ ┌───▼────┐
+                    │Workers│ │Conv.│ │Template│
+                    │       │ │Mgmt │ │ System │
+                    └───────┘ └─────┘ └────────┘
+ ```
+
+ ### Core Components
+
+ - **Engine**: Central orchestrator managing configuration, adapters, and workers
+ - **Workers**: Reusable task definitions with embedded business logic
+ - **Conversation**: Context and message history management
+ - **Adapters**: LLM provider integrations (OpenAI, Llama.cpp, etc.)
+ - **Templates**: ERB-based prompt template system
+
+ ## 🔧 Configuration Reference
+
+ ### Adapter Configuration
+
+ ```yaml
+ adapters:
+   openai: "OpenAIAdapter"  # For OpenAI API
+ ```
+
+ ### LLM Configuration
+
+ ```yaml
+ llms:
+   model_name:
+     adapter: "adapter_name"
+     api_key: "your_api_key"  # Can use ENV['KEY_NAME']
+     url: "https://api.url"
+     model: "model_identifier"
+     temperature: 0.7
+     # Additional provider-specific options
+ ```
+
+ ### Path Configuration
+
+ ```yaml
+ template_path: "./templates"  # Directory for .erb templates
+ worker_path: "./workers"      # Directory for worker definitions
+ logger_file: "./logs/app.log" # Log file location
+ ```
+
+ ## 🧪 Testing
+
+ Run the test suite:
+
+ ```bash
+ bundle exec rake test
+ ```
+
+ For development, you can use the console:
+
+ ```bash
+ bundle exec bin/console
+ ```
+
+ ## 🤝 Integration Examples
+
+ ### With Rails Applications
+
+ ```ruby
+ # config/initializers/smart_prompt.rb
+ class SmartPromptService
+   def self.engine
+     @engine ||= SmartPrompt::Engine.new(
+       Rails.root.join('config', 'smart_prompt.yml')
+     )
+   end
+
+   def self.chat(message, context: nil)
+     engine.call_worker(:chat_assistant, {
+       question: message,
+       context: context
+     })
+   end
+ end
+
+ # In your controller
+ class ChatController < ApplicationController
+   def create
+     response = SmartPromptService.chat(
+       params[:message],
+       context: session[:conversation_context]
+     )
+
+     render json: { response: response }
+   end
+ end
+ ```
+
+ ### With Sidekiq Background Jobs
+
+ ```ruby
+ class LLMProcessingJob < ApplicationJob
+   def perform(task_type, parameters)
+     engine = SmartPrompt::Engine.new('config/smart_prompt.yml')
+     result = engine.call_worker(task_type.to_sym, parameters)
+
+     # Process result...
+     NotificationService.send_completion(result)
+   end
+ end
+ ```
+
+ ## 🚀 Real-world Use Cases
+
+ - **Chatbots and Conversational AI**: Build sophisticated chatbots with context awareness
+ - **Content Generation**: Automated content creation with template-driven prompts
+ - **Code Analysis**: AI-powered code review and documentation generation
+ - **Customer Support**: Intelligent ticket routing and response suggestions
+ - **Data Processing**: LLM-powered data extraction and transformation
+ - **Educational Tools**: AI tutors and learning assistance systems
+
+ ## 🛣️ Roadmap
+
+ - [ ] Additional LLM provider adapters (Anthropic Claude, Google PaLM)
+ - [ ] Visual prompt builder and management interface
+ - [ ] Enhanced caching and performance optimizations
+ - [ ] Integration with vector databases for RAG applications
+ - [ ] Built-in evaluation and testing framework for prompts
+ - [ ] Distributed worker execution support
+
+ ## 🤝 Contributing
+
+ We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
+
+ 1. Fork the repository
+ 2. Create your feature branch (`git checkout -b feature/amazing-feature`)
+ 3. Commit your changes (`git commit -am 'Add amazing feature'`)
+ 4. Push to the branch (`git push origin feature/amazing-feature`)
+ 5. Open a Pull Request
+
+ ## 📄 License
+
+ This project is licensed under the MIT License - see the [LICENSE.txt](LICENSE.txt) file for details.
+
+ ## 🙏 Acknowledgments
+
+ - Built with ❤️ by the SmartPrompt team
+ - Inspired by the need for elegant LLM integration in Ruby applications
+ - Thanks to all contributors and the Ruby community
+
+ ## 📞 Support
+
+ - 📖 [Documentation](https://github.com/zhuangbiaowei/smart_prompt/wiki)
+ - 🐛 [Issue Tracker](https://github.com/zhuangbiaowei/smart_prompt/issues)
+ - 💬 [Discussions](https://github.com/zhuangbiaowei/smart_prompt/discussions)
+ - 📧 Email: zbw@kaiyuanshe.org
+
+ ---

- See project [SmartPromptDemo](https://github.com/zhuangbiaowei/SmartPromptDemo) for more examples.
+ **SmartPrompt** - Making LLM integration in Ruby applications simple, powerful, and elegant.
data/lib/smart_prompt/conversation.rb CHANGED
@@ -6,6 +6,7 @@ module SmartPrompt
    class Conversation
      include APIHandler
      attr_reader :messages, :last_response, :config_file
+     attr_reader :last_call_id

      def initialize(engine, tools = nil)
        SmartPrompt.logger.info "Create Conversation"
@@ -13,7 +14,9 @@ module SmartPrompt
        @engine = engine
        @adapters = engine.adapters
        @llms = engine.llms
+       @current_llm_name = nil
        @templates = engine.templates
+       @temperature = 0.7
        @current_adapter = engine.current_adapter
        @last_response = nil
        @tools = tools
@@ -22,11 +25,15 @@ module SmartPrompt
      def use(llm_name)
        raise "Adapter #{adapter_name} not configured" unless @llms.key?(llm_name)
        @current_llm = @llms[llm_name]
+       @current_llm_name = llm_name
        self
      end

      def model(model_name)
        @model_name = model_name
+       if @engine.config["better_prompt_db"]
+         BetterPrompt.add_model(@current_llm_name, @model_name)
+       end
      end

      def temperature(temperature)
@@ -51,9 +58,15 @@ module SmartPrompt
          raise "Template #{template_name} not found" unless @templates.key?(template_name)
          content = @templates[template_name].render(params)
          add_message({ role: "user", content: content }, with_history)
+         if @engine.config["better_prompt_db"]
+           BetterPrompt.add_prompt(template_name, "user", content)
+         end
          self
        else
          add_message({ role: "user", content: template_name }, with_history)
+         if @engine.config["better_prompt_db"]
+           BetterPrompt.add_prompt("NULL", "user", template_name)
+         end
          self
        end
      end
@@ -61,6 +74,9 @@ module SmartPrompt
      def sys_msg(message, params)
        @sys_msg = message
        add_message({ role: "system", content: message }, params[:with_history])
+       if @engine.config["better_prompt_db"]
+         BetterPrompt.add_prompt("NULL", "system", message)
+       end
        self
      end

@@ -75,6 +91,13 @@ module SmartPrompt
      def send_msg(params = {})
        Retriable.retriable(RETRY_OPTIONS) do
          raise ConfigurationError, "No LLM selected" if @current_llm.nil?
+         if @engine.config["better_prompt_db"]
+           if params[:with_history]
+             @last_call_id = BetterPrompt.add_model_call(@current_llm_name, @model_name, history_messages, false, @temperature, 0, 0.0, 0, @tools)
+           else
+             @last_call_id = BetterPrompt.add_model_call(@current_llm_name, @model_name, @messages, false, @temperature, 0, 0.0, 0, @tools)
+           end
+         end
          if params[:with_history]
            @last_response = @current_llm.send_request(history_messages, @model_name, @temperature, @tools, nil)
          else
@@ -83,6 +106,9 @@ module SmartPrompt
          if @last_response == ""
            @last_response = @current_llm.last_response
          end
+         if @engine.config["better_prompt_db"]
+           BetterPrompt.add_response(@last_call_id, @last_response, false)
+         end
          @messages = []
          @messages << { role: "system", content: @sys_msg }
          @last_response
@@ -94,11 +120,21 @@ module SmartPrompt
      def send_msg_by_stream(params = {}, &proc)
        Retriable.retriable(RETRY_OPTIONS) do
          raise ConfigurationError, "No LLM selected" if @current_llm.nil?
+         if @engine.config["better_prompt_db"]
+           if params[:with_history]
+             @last_call_id = BetterPrompt.add_model_call(@current_llm_name, @model_name, history_messages, true, @temperature, 0, 0.0, 0, @tools)
+           else
+             @last_call_id = BetterPrompt.add_model_call(@current_llm_name, @model_name, @messages, true, @temperature, 0, 0.0, 0, @tools)
+           end
+         end
          if params[:with_history]
            @current_llm.send_request(history_messages, @model_name, @temperature, @tools, proc)
          else
            @current_llm.send_request(@messages, @model_name, @temperature, @tools, proc)
          end
+         if @engine.config["better_prompt_db"]
+           BetterPrompt.add_response(@last_call_id, @engine.stream_response, true)
+         end
          @messages = []
          @messages << { role: "system", content: @sys_msg }
        end
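
The pattern across these hunks is uniform: each step of a conversation (`model`, `prompt`, `sys_msg`, `send_msg`, `send_msg_by_stream`) is wrapped in an `if @engine.config["better_prompt_db"]` guard, so BetterPrompt recording is strictly opt-in, and the new `last_call_id` reader ties a response back to its `add_model_call` record. A hedged sketch of driving a `Conversation` directly to observe this (the LLM name, model, and prompt text are illustrative):

```ruby
require "smart_prompt"

engine = SmartPrompt::Engine.new("config/smart_prompt.yml")
conv = SmartPrompt::Conversation.new(engine)

conv.use("SiliconFlow")                 # also stores @current_llm_name for BetterPrompt
conv.model("Qwen/Qwen2.5-7B-Instruct")  # recorded via BetterPrompt.add_model when enabled
conv.sys_msg("You are a helpful assistant.", {})
conv.prompt("Hello!")
conv.send_msg                           # recorded via add_model_call / add_response when enabled

# Populated only when better_prompt_db is configured; nil otherwise.
puts conv.last_call_id
```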
data/lib/smart_prompt/engine.rb CHANGED
@@ -64,6 +64,10 @@ module SmartPrompt
          SmartPrompt.logger = Logger.new(@config["logger_file"])
        end
        SmartPrompt.logger.info "Loading configuration from file: #{config_file}"
+       if @config["better_prompt_db"]
+         require "better_prompt"
+         BetterPrompt.setup(db_path: @config["better_prompt_db"])
+       end
        @config["adapters"].each do |adapter_name, adapter_class|
          adapter_class = SmartPrompt.const_get(adapter_class)
          @adapters[adapter_name] = adapter_class
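
Per this hunk, the whole feature hangs off one new configuration key: when `better_prompt_db` is present, the engine requires the `better_prompt` gem and points it at the given database path; without the key, none of the new code paths run. A minimal opt-in sketch (the database path is illustrative, and the `better_prompt` gem must be installed):

```ruby
# config/smart_prompt.yml would additionally contain, e.g.:
#   better_prompt_db: "./db/better_prompt.db"

require "smart_prompt"

# Engine.new loads the config; with better_prompt_db set it runs
# require "better_prompt" and BetterPrompt.setup(db_path: ...) as shown above.
engine = SmartPrompt::Engine.new("config/smart_prompt.yml")

# From here on, worker runs are also recorded through the
# BetterPrompt hooks added in conversation.rb.
result = engine.call_worker(:chat_assistant, { question: "What is machine learning?" })
puts result
```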
data/lib/smart_prompt/version.rb CHANGED
@@ -1,3 +1,3 @@
  module SmartPrompt
-   VERSION = "0.3.2"
+   VERSION = "0.3.3"
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: smart_prompt
  version: !ruby/object:Gem::Version
-   version: 0.3.2
+   version: 0.3.3
  platform: ruby
  authors:
  - zhuang biaowei
@@ -137,7 +137,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
    - !ruby/object:Gem::Version
      version: '0'
  requirements: []
- rubygems_version: 3.6.8
+ rubygems_version: 3.6.9
  specification_version: 4
  summary: A smart prompt management and LLM interaction gem
  test_files: []