markdown-flow 0.2.10__py3-none-any.whl → 0.2.30__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: markdown-flow
- Version: 0.2.10
+ Version: 0.2.30
  Summary: An agent library designed to parse and process MarkdownFlow documents
  Project-URL: Homepage, https://github.com/ai-shifu/markdown-flow-agent-py
  Project-URL: Bug Tracker, https://github.com/ai-shifu/markdown-flow-agent-py/issues
@@ -92,36 +92,6 @@ for chunk in mf.process(
  print(chunk.content, end='')
  ```

- ### Dynamic Interaction Generation ✨
-
- Transform natural language content into interactive elements automatically:
-
- ```python
- from markdown_flow import MarkdownFlow, ProcessMode
-
- # Dynamic interaction generation works automatically
- mf = MarkdownFlow(
-     document="询问用户的菜品偏好,并记录到变量{{菜品选择}}",
-     llm_provider=llm_provider,
-     document_prompt="你是中餐厅服务员,提供川菜、粤菜、鲁菜等选项"
- )
-
- # Process with Function Calling
- result = mf.process(0, ProcessMode.COMPLETE)
-
- if result.transformed_to_interaction:
-     print(f"Generated interaction: {result.content}")
-     # Output: ?[%{{菜品选择}} 宫保鸡丁||麻婆豆腐||水煮鱼||...其他菜品]
-
- # Continue with user input
- user_result = mf.process(
-     block_index=0,
-     mode=ProcessMode.COMPLETE,
-     user_input={"菜品选择": ["宫保鸡丁", "麻婆豆腐"]},
-     dynamic_interaction_format=result.content
- )
- ```
-
  ### Interactive Elements

  ```python
@@ -159,59 +129,6 @@ result = mf.process(
  )
  ```

- ## ✨ Key Features
-
- ### 🏗️ Three-Layer Architecture
-
- - **Document Level**: Parse `---` separators and `?[]` interaction patterns
- - **Block Level**: Categorize as CONTENT, INTERACTION, or PRESERVED_CONTENT
- - **Interaction Level**: Handle 6 different interaction types with smart validation
-
- ### 🔄 Dynamic Interaction Generation
-
- - **Natural Language Input**: Write content in plain language
- - **AI-Powered Conversion**: LLM automatically detects interaction needs using Function Calling
- - **Structured Data Generation**: LLM returns structured data, core builds MarkdownFlow format
- - **Language Agnostic**: Support for any language with proper document prompts
- - **Context Awareness**: Both original and resolved variable contexts provided to LLM
-
- ### 🤖 Unified LLM Integration
-
- - **Single Interface**: One `complete()` method for both regular and Function Calling modes
- - **Automatic Detection**: Tools parameter determines processing mode automatically
- - **Consistent Returns**: Always returns `LLMResult` with structured metadata
- - **Error Handling**: Automatic fallback from Function Calling to regular completion
- - **Provider Agnostic**: Abstract interface supports any LLM service
-
- ### 📝 Variable System
-
- - **Replaceable Variables**: `{{variable}}` for content personalization
- - **Preserved Variables**: `%{{variable}}` for LLM understanding in interactions
- - **Multi-Value Support**: Handle both single values and arrays
- - **Smart Extraction**: Automatic detection from document content
-
- ### 🎯 Interaction Types
-
- - **Text Input**: `?[%{{var}}...question]` - Free text entry
- - **Single Select**: `?[%{{var}} A|B|C]` - Choose one option
- - **Multi Select**: `?[%{{var}} A||B||C]` - Choose multiple options
- - **Mixed Mode**: `?[%{{var}} A||B||...custom]` - Predefined + custom input
- - **Display Buttons**: `?[Continue|Cancel]` - Action buttons without assignment
- - **Value Separation**: `?[%{{var}} Display//value|...]` - Different display/stored values
-
- ### 🔒 Content Preservation
-
- - **Multiline Format**: `!===content!===` blocks output exactly as written
- - **Inline Format**: `===content===` for single-line preserved content
- - **Variable Support**: Preserved content can contain variables for substitution
-
- ### ⚡ Performance Optimized
-
- - **Pre-compiled Regex**: All patterns compiled once for maximum performance
- - **Synchronous Interface**: Clean synchronous operations with optional streaming
- - **Stream Processing**: Real-time streaming responses supported
- - **Memory Efficient**: Lazy evaluation and generator patterns
-
  ## 📖 API Reference

  ### Core Classes
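The feature list removed in this hunk is where the 0.2.10 README documented the interaction and variable syntax. For reference, a small illustrative sketch of a document using those patterns; the document text itself is invented for this example and is not taken from either README:

```python
from markdown_flow import MarkdownFlow

# Illustrative document using the syntax from the removed bullets above:
# {{variable}} substitution, `---` block separators, single-select and
# button interactions, and inline preserved content. The wording is made up.
document = """
Hello {{user_name}}, pick a difficulty level.

---

?[%{{level}} Beginner|Intermediate|Advanced]

---

===This line is output exactly as written, after {{user_name}} is substituted.===

---

?[Continue|Cancel]
"""

# Construction mirrors the constructor shown elsewhere in this diff;
# `llm_provider` would be any LLMProvider implementation.
# mf = MarkdownFlow(document=document, llm_provider=llm_provider)
```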
@@ -237,7 +154,7 @@ class MarkdownFlow:
          mode: ProcessMode = ProcessMode.COMPLETE,
          variables: Optional[Dict[str, str]] = None,
          user_input: Optional[str] = None
-     ) -> LLMResult: ...
+     ) -> LLMResult | Generator[LLMResult, None, None]: ...
  ```

  **Methods:**
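With this change, `process()` returns a single `LLMResult` in COMPLETE mode and a generator of `LLMResult` chunks in STREAM mode. A minimal sketch of handling both shapes, based on the usage examples elsewhere in this README diff; `llm_provider` is a placeholder for any `LLMProvider` implementation:

```python
from markdown_flow import MarkdownFlow, ProcessMode

# `llm_provider` stands in for any LLMProvider implementation (see below).
mf = MarkdownFlow(document="Say hello to {{user_name}}.", llm_provider=llm_provider)

# COMPLETE mode: a single LLMResult.
result = mf.process(0, ProcessMode.COMPLETE)
print(result.content)

# STREAM mode: a generator of LLMResult chunks.
for chunk in mf.process(0, ProcessMode.STREAM):
    print(chunk.content, end="")
```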
@@ -267,18 +184,13 @@ Processing mode enumeration for different use cases.

  ```python
  class ProcessMode(Enum):
-     PROMPT_ONLY = "prompt_only"  # Generate prompts without LLM calls
-     COMPLETE = "complete"        # Non-streaming LLM processing
-     STREAM = "stream"            # Streaming LLM responses
+     COMPLETE = "complete"  # Non-streaming LLM processing
+     STREAM = "stream"      # Streaming LLM responses
  ```

  **Usage:**

  ```python
- # Generate prompt only
- prompt_result = mf.process(0, ProcessMode.PROMPT_ONLY)
- print(prompt_result.content)  # Raw prompt text
-
  # Complete response
  complete_result = mf.process(0, ProcessMode.COMPLETE)
  print(complete_result.content)  # Full LLM response
@@ -298,10 +210,10 @@ from typing import Generator

  class LLMProvider(ABC):
      @abstractmethod
-     def complete(self, prompt: str) -> LLMResult: ...
+     def complete(self, messages: list[dict[str, str]]) -> str: ...

      @abstractmethod
-     def stream(self, prompt: str) -> Generator[str, None, None]: ...
+     def stream(self, messages: list[dict[str, str]]) -> Generator[str, None, None]: ...
  ```

  **Custom Implementation:**
@@ -311,23 +223,22 @@ class OpenAIProvider(LLMProvider):
      def __init__(self, api_key: str):
          self.client = openai.OpenAI(api_key=api_key)

-     def complete(self, prompt: str) -> LLMResult:
-         response = self.client.completions.create(
+     def complete(self, messages: list[dict[str, str]]) -> str:
+         response = self.client.chat.completions.create(
              model="gpt-3.5-turbo",
-             prompt=prompt,
-             max_tokens=500
+             messages=messages
          )
-         return LLMResult(content=response.choices[0].text.strip())
+         return response.choices[0].message.content

-     def stream(self, prompt: str):
-         stream = self.client.completions.create(
+     def stream(self, messages: list[dict[str, str]]):
+         stream = self.client.chat.completions.create(
              model="gpt-3.5-turbo",
-             prompt=prompt,
+             messages=messages,
              stream=True
          )
          for chunk in stream:
-             if chunk.choices[0].text:
-                 yield chunk.choices[0].text
+             if chunk.choices[0].delta.content:
+                 yield chunk.choices[0].delta.content
  ```

  ### Block Types
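The updated provider interface is message-based rather than prompt-based. A minimal sketch of wiring such a provider into `MarkdownFlow`, assuming the `OpenAIProvider` class from the snippet above is in scope; the model, document text, and variable values are placeholders:

```python
import os

from markdown_flow import MarkdownFlow, ProcessMode

# Assumes the OpenAIProvider class defined in the README snippet above.
provider = OpenAIProvider(api_key=os.environ["OPENAI_API_KEY"])

mf = MarkdownFlow(
    document="Greet {{user_name}} in one friendly sentence.",  # placeholder document
    llm_provider=provider,
)

# STREAM mode drives the provider's stream() method and yields LLMResult chunks.
for chunk in mf.process(0, ProcessMode.STREAM, variables={"user_name": "developer"}):
    print(chunk.content, end="")
```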
@@ -647,7 +558,7 @@ Include code samples, explanations, and practice exercises.
      mode=ProcessMode.STREAM,
      variables={
          'user_name': 'developer',
-         'topic': 'async programming'
+         'topic': 'synchronous programming'
      }
  ):
      content += chunk.content
@@ -701,9 +612,9 @@ class InteractiveDocumentBuilder:
          self.user_responses.update(response)

      def handle_interaction(self, interaction_content: str):
-         from markdown_flow.utils import InteractionParser
+         from markdown_flow.parser import InteractionParser

-         interaction = InteractionParser.parse(interaction_content)
+         interaction = InteractionParser().parse(interaction_content)
          print(f"\n{interaction_content}")

          if interaction.name == "BUTTONS_ONLY":
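In 0.2.30 the parser lives in `markdown_flow.parser` and is instantiated before use. A minimal sketch of parsing an interaction block on its own; the sample interaction string is invented, and only the `name` attribute checked in the snippet above is assumed to exist on the result:

```python
from markdown_flow.parser import InteractionParser

# Sample interaction using the single-select syntax; the variable name is made up.
interaction = InteractionParser().parse("?[%{{level}} Beginner|Intermediate|Advanced]")

# The builder example above branches on `interaction.name` (e.g. "BUTTONS_ONLY");
# other attributes of the parse result are not documented in this diff.
print(interaction.name)
```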
@@ -0,0 +1,24 @@
+ markdown_flow/__init__.py,sha256=E-P0SvBLJKQSwj2ZijjjXsDFl9axsrX8oTfTl7YBO7w,2808
+ markdown_flow/constants.py,sha256=aroEBhrOGY6JxofRxPFe87vesEJFY1Srm0_jRHVdtig,14274
+ markdown_flow/core.py,sha256=bwUdblJPPWqV_Utwn3ijmjpauI_FXcZUlVG30BIa7nw,47831
+ markdown_flow/enums.py,sha256=Wr41zt0Ce5b3fyLtOTE2erEVp1n92g9OVaBF_BZg_l8,820
+ markdown_flow/exceptions.py,sha256=9sUZ-Jd3CPLdSRqG8Pw7eMm7cv_S3VZM6jmjUU8OhIc,976
+ markdown_flow/llm.py,sha256=MJRbXKj35AjLCAhWpFhS07s-m3YU2qO1HOFff05HG2I,2239
+ markdown_flow/models.py,sha256=EJtZ-EQffKHmNbKJXVsHxGXHV0ywOgdAzopWDSjyVmM,2417
+ markdown_flow/utils.py,sha256=TlQan3rNIcbIzgOa-kpphFKpw9IXblFKhIesac_lu3Y,28769
+ markdown_flow/parser/__init__.py,sha256=zhLc8m7OvkdKk7K70Db9u3EgTGGcPG1_Cxj5EC2Fnwo,1144
+ markdown_flow/parser/code_fence_utils.py,sha256=DdkZDTXSCNMfDfODYVOopWd4-5Enci5siplt8JTFs1g,5074
+ markdown_flow/parser/interaction.py,sha256=T4W7iO-iyNJnpM7SmvOH_DRlLuWSDcFyIrN2fH6cv7w,12653
+ markdown_flow/parser/json_parser.py,sha256=78GhyyOjlg0l4UmKKNc4zrg-4pSHzrJEt7VKqbz3uyE,1305
+ markdown_flow/parser/output.py,sha256=LgxvH6-RINM50p58miQtw_fHER1JEWDGucHk5-sZ-gk,8087
+ markdown_flow/parser/preprocessor.py,sha256=YO2znQo7biYAxZZIO5oyrH4h88LZPIe3SidX7ZEOS88,4877
+ markdown_flow/parser/validation.py,sha256=fc5-zL4_vsgFQuQ0BHXlJRH5Vkx102SKJy-H72tpLK8,3647
+ markdown_flow/parser/variable.py,sha256=eJLbVOyZT8uYM5eJNv5kHLqdRoNz5iNlxHhhi2oDW94,2986
+ markdown_flow/providers/__init__.py,sha256=QMr8H9gxoLr6pWXoAb11oZX_She6KWPxnRips537nQ4,319
+ markdown_flow/providers/config.py,sha256=Y4Nihqj3KxI6_RyvVKF_mv4mBoPNXeLgYQgv0FqxQfU,2057
+ markdown_flow/providers/openai.py,sha256=KgExRJ8QsCeU_c-Yx3IhxG2hBbYN5uZ-uf0VTMvD1LE,12326
+ markdown_flow-0.2.30.dist-info/licenses/LICENSE,sha256=qz3BziejhHPd1xa5eVtYEU5Qp6L2pn4otuj194uGxmc,1069
+ markdown_flow-0.2.30.dist-info/METADATA,sha256=LEehzrEIw6Q_TyvGFuAG-m0fPEKgO-QcaLgD8LcvyYM,20686
+ markdown_flow-0.2.30.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ markdown_flow-0.2.30.dist-info/top_level.txt,sha256=DpigGvQuIt2L0TTLnDU5sylhiTGiZS7MmAMa2hi-AJs,14
+ markdown_flow-0.2.30.dist-info/RECORD,,
@@ -1,13 +0,0 @@
- markdown_flow/__init__.py,sha256=BVUJBlr7xrT6ctNgYHPFynvY7bPXcptugNDAsTp4oJU,2875
- markdown_flow/constants.py,sha256=pd_KCpTEVlz_IXYekrByqb9VWCQR_XHXoGsFYdLW1Eg,8006
- markdown_flow/core.py,sha256=HNeSbVBuiu03NF6m_jSo1o7V-W_FbdPsth62VLsG7Nw,49170
- markdown_flow/enums.py,sha256=Wr41zt0Ce5b3fyLtOTE2erEVp1n92g9OVaBF_BZg_l8,820
- markdown_flow/exceptions.py,sha256=9sUZ-Jd3CPLdSRqG8Pw7eMm7cv_S3VZM6jmjUU8OhIc,976
- markdown_flow/llm.py,sha256=7DjOL2h2N1g0L4NF9kn0M5mR45ZL0vPsW3TzuOGy1bw,2547
- markdown_flow/models.py,sha256=ENcvXMVXwpFN-RzbeVHhXTjBN0bbmRpJ96K-XS2rizI,2893
- markdown_flow/utils.py,sha256=cVi0zDRK_rCMAr3EDhgITmx6Po5fSvYjqrprYaitYE0,28450
- markdown_flow-0.2.10.dist-info/licenses/LICENSE,sha256=qz3BziejhHPd1xa5eVtYEU5Qp6L2pn4otuj194uGxmc,1069
- markdown_flow-0.2.10.dist-info/METADATA,sha256=QLL_76xo6kwnmH6_6RdpZ5NuFT8MTAZavmHpCFPQ54A,24287
- markdown_flow-0.2.10.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- markdown_flow-0.2.10.dist-info/top_level.txt,sha256=DpigGvQuIt2L0TTLnDU5sylhiTGiZS7MmAMa2hi-AJs,14
- markdown_flow-0.2.10.dist-info/RECORD,,