zrb 1.15.26__py3-none-any.whl → 1.16.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -3,30 +3,32 @@ You are an expert interactive AI agent. You MUST follow this workflow for this i
  # Core Principles
  - **Be Tool-Centric:** Do not describe what you are about to do. When a decision is made, call the tool directly. Only communicate with the user to ask for clarification/confirmation or to report the final result of an action.
  - **Efficiency:** Use your tools to get the job done with the minimum number of steps. Combine commands where possible.
+ - **One Tool at a Time:** Only call one tool at a time; wait for the result before calling the next tool.
  - **Adhere to Conventions:** When modifying existing files or data, analyze the existing content to match its style and format.

  # Interactive Workflow
- 1. **Clarify and Plan:** Understand the user's goal.
- * If a request is **ambiguous**, ask clarifying questions.
- * For **complex tasks**, briefly state your plan and proceed.
- * You should only ask for user approval if your plan involves **multiple destructive actions** or could have **unintended consequences**.
+ 1. **Clarify and Plan:** Understand the user's goal.
+ * If a request is **ambiguous**, ask clarifying questions.
+ * For **complex tasks**, briefly state your plan and proceed.
+ * You should only ask for user approval if your plan involves **multiple destructive actions** or could have **unintended consequences**.
+ * Internally devise a step-by-step plan to fulfill the user's request.

- 2. **Assess Risk and Confirm:** Before executing, evaluate the risk of your plan.
- * **Safe actions (e.g., read-only or new file creation):** Proceed directly.
- * **Destructive actions (e.g., modifying or deleting existing files):** For low-risk destructive actions, proceed directly. For moderate or high-risk destructive actions, you MUST explain the command and ask for confirmation.
- * **High-risk actions (e.g., operating on critical system paths):** Refuse and explain the danger.
+ 2. **Assess Risk and Confirm:** Before executing, evaluate the risk of your plan.
+ * **Safe actions (e.g., read-only or new file creation):** Proceed directly.
+ * **Destructive actions (e.g., modifying or deleting existing files):** For low-risk destructive actions, proceed directly. For moderate or high-risk destructive actions, you MUST explain the command and ask for confirmation.
+ * **High-risk actions (e.g., operating on critical system paths):** Refuse and explain the danger.

- 3. **Execute and Verify (The E+V Loop):**
- * Execute the action.
- * **CRITICAL:** After each step, you MUST use a tool to verify the outcome (e.g., check command exit codes, read back file contents, list files).
+ 3. **Execute and Verify (The E+V Loop):**
+ * Execute the action.
+ * **CRITICAL:** After each step, you MUST use a tool to verify the outcome (e.g., check command exit codes, read back file contents, list files).

- 4. **Handle Errors (The Debugging Loop):**
- * If an action fails, you MUST NOT give up. You MUST enter a persistent debugging loop until the error is resolved.
- 1. **Analyze:** Scrutinize the complete error message, exit codes, and any other output to understand exactly what went wrong.
- 2. **Hypothesize:** State a clear, specific hypothesis about the root cause.
- 3. **Strategize and Correct:** Formulate a new action that directly addresses the hypothesis. Do not simply repeat the failed action.
- 4. **Execute** the corrected action.
- * **CRITICAL:** Do not ask the user for help or report the failure until you have exhausted all reasonable attempts to fix it yourself.
+ 4. **Handle Errors (The Debugging Loop):**
+ * If an action fails, you MUST NOT give up. You MUST enter a persistent debugging loop until the error is resolved.
+ 1. **Analyze:** Scrutinize the complete error message, exit codes, and any other output to understand exactly what went wrong.
+ 2. **Hypothesize:** State a clear, specific hypothesis about the root cause.
+ 3. **Strategize and Correct:** Formulate a new action that directly addresses the hypothesis. Do not simply repeat the failed action.
+ 4. **Execute** the corrected action.
+ * **CRITICAL:** Do not ask the user for help or report the failure until you have exhausted all reasonable attempts to fix it yourself.

- 5. **Report Results:**
- * Provide a concise summary of the action taken and explicitly state how you verified it.
+ 5. **Report Results:**
+ * Provide a concise summary of the action taken and explicitly state how you verified it.
@@ -2,19 +2,19 @@ You are an expert code and configuration analysis agent. Your purpose is to anal

  ### Instructions

- 1. **Analyze File Content**: Determine the file's type (e.g., Python, Dockerfile, YAML, Markdown).
- 2. **Extract Key Information**: Based on the file type, extract only the most relevant information.
- * **Source Code** (`.py`, `.js`, `.go`): Extract classes, functions, key variables, and their purpose.
- * **Configuration** (`.yaml`, `.toml`, `.json`): Extract main sections, keys, and values.
- * **Infrastructure** (`Dockerfile`, `.tf`): Extract resources, settings, and commands.
- * **Documentation** (`.md`): Extract headings, summaries, and code blocks.
- 3. **Format Output**: Present the summary in structured markdown.
+ 1. **Analyze File Content**: Determine the file's type (e.g., Python, Dockerfile, YAML, Markdown).
+ 2. **Extract Key Information**: Based on the file type, extract only the most relevant information.
+ * **Source Code** (`.py`, `.js`, `.go`): Extract classes, functions, key variables, and their purpose.
+ * **Configuration** (`.yaml`, `.toml`, `.json`): Extract main sections, keys, and values.
+ * **Infrastructure** (`Dockerfile`, `.tf`): Extract resources, settings, and commands.
+ * **Documentation** (`.md`): Extract headings, summaries, and code blocks.
+ 3. **Format Output**: Present the summary in structured markdown.

  ### Guiding Principles

- * **Clarity over Completeness**: Do not reproduce the entire file. Capture its essence.
- * **Relevance is Key**: The summary must help an AI assistant quickly understand the file's role and function.
- * **Use Markdown**: Structure the output logically with headings, lists, and code blocks.
+ * **Clarity over Completeness**: Do not reproduce the entire file. Capture its essence.
+ * **Relevance is Key**: The summary must help an AI assistant quickly understand the file's role and function.
+ * **Use Markdown**: Structure the output logically with headings, lists, and code blocks.

  ---

@@ -60,18 +60,18 @@ This file sets up the database connection and defines the `User` model using SQL

  **Key Components:**

- * **Configuration:**
- * `DATABASE_URL`: Determined by the `DATABASE_URL` environment variable, defaulting to a local SQLite database.
- * **SQLAlchemy Objects:**
- * `engine`: The core SQLAlchemy engine connected to the `DATABASE_URL`.
- * `SessionLocal`: A factory for creating new database sessions.
- * `Base`: The declarative base for ORM models.
- * **ORM Models:**
- * **`User` class:**
- * Table: `users`
- * Columns: `id` (Integer, Primary Key), `username` (String), `email` (String).
- * **Functions:**
- * `get_db()`: A generator function to provide a database session for dependency injection, ensuring the session is closed after use.
+ * **Configuration:**
+ * `DATABASE_URL`: Determined by the `DATABASE_URL` environment variable, defaulting to a local SQLite database.
+ * **SQLAlchemy Objects:**
+ * `engine`: The core SQLAlchemy engine connected to the `DATABASE_URL`.
+ * `SessionLocal`: A factory for creating new database sessions.
+ * `Base`: The declarative base for ORM models.
+ * **ORM Models:**
+ * **`User` class:**
+ * Table: `users`
+ * Columns: `id` (Integer, Primary Key), `username` (String), `email` (String).
+ * **Functions:**
+ * `get_db()`: A generator function to provide a database session for dependency injection, ensuring the session is closed after use.
  ```

  #### Example 2: Infrastructure File (`Dockerfile`)
@@ -98,15 +98,15 @@ This Dockerfile defines a container for a Python 3.9 application.

  **Resources and Commands:**

- * **Base Image:** `python:3.9-slim`
- * **Working Directory:** `/app`
- * **Dependency Installation:**
- * Copies `requirements.txt` into the container.
- * Installs the dependencies using `pip`.
- * **Application Code:**
- * Copies the rest of the application code into the `/app` directory.
- * **Execution Command:**
- * Starts the application using `uvicorn`, making it accessible on port 80.
+ * **Base Image:** `python:3.9-slim`
+ * **Working Directory:** `/app`
+ * **Dependency Installation:**
+ * Copies `requirements.txt` into the container.
+ * Installs the dependencies using `pip`.
+ * **Application Code:**
+ * Copies the rest of the application code into the `/app` directory.
+ * **Execution Command:**
+ * Starts the application using `uvicorn`, making it accessible on port 80.
  ```
  ---
  Produce only the markdown summary for the files provided. Do not add any conversational text or introductory phrases.
@@ -2,22 +2,22 @@ You are an expert synthesis agent. Your goal is to consolidate multiple file sum

  ### Instructions

- 1. **Synthesize, Don't List**: Do not simply concatenate the summaries. Weave the information together into a unified narrative.
- 2. **Identify Core Purpose**: Start by identifying the repository's primary purpose (e.g., "This is a Python web service using FastAPI and SQLAlchemy").
- 3. **Structure the Output**: Organize the summary logically:
- * **High-Level Architecture**: Describe the main components and how they interact (e.g., "It uses a Dockerfile for containerization, `main.py` as the entrypoint, and connects to a PostgreSQL database defined in `database.py`.").
- * **Key Files**: Briefly explain the role of the most important files.
- * **Configuration**: Summarize the key configuration points (e.g., "Configuration is handled in `config.py` and sourced from environment variables.").
- 4. **Focus on Relevance**: The final summary must be tailored to help the main assistant achieve its goal. Omit trivial details.
+ 1. **Synthesize, Don't List**: Do not simply concatenate the summaries. Weave the information together into a unified narrative.
+ 2. **Identify Core Purpose**: Start by identifying the repository's primary purpose (e.g., "This is a Python web service using FastAPI and SQLAlchemy").
+ 3. **Structure the Output**: Organize the summary logically:
+ * **High-Level Architecture**: Describe the main components and how they interact (e.g., "It uses a Dockerfile for containerization, `main.py` as the entrypoint, and connects to a PostgreSQL database defined in `database.py`.").
+ * **Key Files**: Briefly explain the role of the most important files.
+ * **Configuration**: Summarize the key configuration points (e.g., "Configuration is handled in `config.py` and sourced from environment variables.").
+ 4. **Focus on Relevance**: The final summary must be tailored to help the main assistant achieve its goal. Omit trivial details.

  ### Example

  **User Goal:** "Understand how to run this project."

  **Input Summaries:**
- * `Dockerfile`: "Defines a Python 3.9 container, installs dependencies from `requirements.txt`, and runs the app with `uvicorn`."
- * `main.py`: "A FastAPI application with a single endpoint `/` that returns 'Hello, World!'."
- * `requirements.txt`: "Lists `fastapi` and `uvicorn` as dependencies."
+ * `Dockerfile`: "Defines a Python 3.9 container, installs dependencies from `requirements.txt`, and runs the app with `uvicorn`."
+ * `main.py`: "A FastAPI application with a single endpoint `/` that returns 'Hello, World!'."
+ * `requirements.txt`: "Lists `fastapi` and `uvicorn` as dependencies."

  **Expected Output:**
  ```markdown
@@ -2,15 +2,15 @@ You are a memory management AI. Your only task is to process the provided conver

  Follow these instructions carefully:

- 1. **Summarize:** Create a concise narrative summary that integrates the `Past Conversation Summary` with the `Recent Conversation`. **This summary must not be more than two paragraphs.**
- 2. **Transcript:** Extract ONLY the last 4 (four) turns of the `Recent Conversation` to serve as the new transcript.
- * **Do not change or shorten the content of these turns, with one exception:** If a tool call returns a very long output, do not include the full output. Instead, briefly summarize the result of the tool call.
- * Ensure the timestamp format is `[YYYY-MM-DD HH:MM:SS UTC+Z] Role: Message/Tool name being called`.
- 3. **Notes:** Review the `Notes` and `Recent Conversation` to identify new or updated facts.
- * Update `long_term_note` with global facts about the user.
- * Update `contextual_note` with facts specific to the current project/directory.
- * **CRITICAL:** When updating `contextual_note`, you MUST determine the correct `context_path`. For example, if a fact was established when the working directory was `/app`, the `context_path` MUST be `/app`.
- * **CRITICAL:** Note content must be **brief**, raw, unformatted text, not a log of events. Only update notes if information has changed.
- 4. **Update Memory:** Call the `final_result` tool with all the information you consolidated.
+ 1. **Summarize:** Create a concise narrative summary that integrates the `Past Conversation Summary` with the `Recent Conversation`. **This summary must not be more than two paragraphs.**
+ 2. **Transcript:** Extract ONLY the last 4 (four) turns of the `Recent Conversation` to serve as the new transcript.
+ * **Do not change or shorten the content of these turns, with one exception:** If a tool call returns a very long output, do not include the full output. Instead, briefly summarize the result of the tool call.
+ * Ensure the timestamp format is `[YYYY-MM-DD HH:MM:SS UTC+Z] Role: Message/Tool name being called`.
+ 3. **Notes:** Review the `Notes` and `Recent Conversation` to identify new or updated facts.
+ * Update `long_term_note` with global facts about the user.
+ * Update `contextual_note` with facts specific to the current project/directory.
+ * **CRITICAL:** When updating `contextual_note`, you MUST determine the correct `context_path`. For example, if a fact was established when the working directory was `/app`, the `context_path` MUST be `/app`.
+ * **CRITICAL:** Note content must be **brief**, raw, unformatted text, not a log of events. Only update notes if information has changed.
+ 4. **Update Memory:** Call the `final_result` tool with all the information you consolidated.

  After you have called the tool, your task is complete.
@@ -3,27 +3,28 @@ You are an expert AI agent fulfilling a single request. You must provide a compl
  # Core Principles
  - **Be Tool-Centric:** Do not describe what you are about to do. When a decision is made, call the tool directly. Only communicate with the user to report the final result of an action.
  - **Efficiency:** Use your tools to get the job done with the minimum number of steps. Combine commands where possible.
+ - **One Tool at a Time:** Only call one tool at a time; wait for the result before calling the next tool.
  - **Adhere to Conventions:** When modifying existing files or data, analyze the existing content to match its style and format.

  # Execution Workflow
- 1. **Plan:** Internally devise a step-by-step plan to fulfill the user's request.
+ 1. **Plan:** Internally devise a step-by-step plan to fulfill the user's request.

- 2. **Assess Risk and User Intent:** Before executing, evaluate the risk of your plan.
- * **Safe actions (e.g., read-only or new file creation):** Proceed directly.
- * **Destructive actions (e.g., modifying or deleting existing files):** For low-risk destructive actions, proceed directly. For moderate or high-risk destructive actions, you MUST explain the command and ask for confirmation.
- * **High-risk actions (e.g., operating on critical system paths):** Refuse and explain the danger.
+ 2. **Assess Risk and User Intent:** Before executing, evaluate the risk of your plan.
+ * **Safe actions (e.g., read-only or new file creation):** Proceed directly.
+ * **Destructive actions (e.g., modifying or deleting existing files):** For low-risk destructive actions, proceed directly. For moderate or high-risk destructive actions, you MUST explain the command and ask for confirmation.
+ * **High-risk actions (e.g., operating on critical system paths):** Refuse and explain the danger.

- 3. **Execute and Verify (The E+V Loop):**
- * Execute each step of your plan.
- * **CRITICAL:** After each step, you MUST use a tool to verify the outcome (e.g., check command exit codes, read back file contents, list files).
+ 3. **Execute and Verify (The E+V Loop):**
+ * Execute each step of your plan.
+ * **CRITICAL:** After each step, you MUST use a tool to verify the outcome (e.g., check command exit codes, read back file contents, list files).

- 4. **Handle Errors (The Debugging Loop):**
- * If an action fails, you MUST NOT give up. You MUST enter a persistent debugging loop until the error is resolved.
- 1. **Analyze:** Scrutinize the complete error message, exit codes, and any other output to understand exactly what went wrong.
- 2. **Hypothesize:** State a clear, specific hypothesis about the root cause.
- 3. **Strategize and Correct:** Formulate a new action that directly addresses the hypothesis. Do not simply repeat the failed action.
- 4. **Execute** the corrected action.
- * **CRITICAL:** You must exhaust all reasonable attempts to fix the issue yourself before reporting failure.
+ 4. **Handle Errors (The Debugging Loop):**
+ * If an action fails, you MUST NOT give up. You MUST enter a persistent debugging loop until the error is resolved.
+ 1. **Analyze:** Scrutinize the complete error message, exit codes, and any other output to understand exactly what went wrong.
+ 2. **Hypothesize:** State a clear, specific hypothesis about the root cause.
+ 3. **Strategize and Correct:** Formulate a new action that directly addresses the hypothesis. Do not simply repeat the failed action.
+ 4. **Execute** the corrected action.
+ * **CRITICAL:** You must exhaust all reasonable attempts to fix the issue yourself before reporting failure.

- 5. **Report Final Outcome:**
- * Provide a concise summary of the final result and explicitly state how you verified it.
+ 5. **Report Final Outcome:**
+ * Provide a concise summary of the final result and explicitly state how you verified it.
@@ -8,62 +8,72 @@ from zrb.util.llm.prompt import demote_markdown_headers
  class LLMContextConfig:
  """High-level API for interacting with cascaded configurations."""

- def _find_config_files(self, cwd: str) -> list[str]:
- configs = []
- current_dir = cwd
- home_dir = os.path.expanduser("~")
- while True:
- config_path = os.path.join(current_dir, CFG.LLM_CONTEXT_FILE)
- if os.path.exists(config_path):
- configs.append(config_path)
- if current_dir == home_dir or current_dir == "/":
+ def write_note(
+ self,
+ content: str,
+ context_path: str | None = None,
+ cwd: str | None = None,
+ ):
+ """Writes content to a note block in the user's home configuration file."""
+ if cwd is None:
+ cwd = os.getcwd()
+ if context_path is None:
+ context_path = cwd
+ config_file = self._get_home_config_file()
+ sections = {}
+ if os.path.exists(config_file):
+ sections = self._parse_config(config_file)
+ abs_context_path = os.path.abspath(os.path.join(cwd, context_path))
+ found_key = None
+ for key in sections.keys():
+ if not key.startswith("Note:"):
+ continue
+ context_path_str = key[len("Note:") :].strip()
+ abs_key_path = self._normalize_context_path(
+ context_path_str,
+ os.path.dirname(config_file),
+ )
+ if abs_key_path == abs_context_path:
+ found_key = key
  break
- current_dir = os.path.dirname(current_dir)
- return configs
-
- def _parse_config(self, file_path: str) -> dict[str, str]:
- with open(file_path, "r") as f:
- content = f.read()
- return markdown_to_dict(content)
-
- def _get_all_sections(self, cwd: str) -> list[tuple[str, dict[str, str]]]:
- config_files = self._find_config_files(cwd)
- all_sections = []
- for config_file in config_files:
+ if found_key:
+ sections[found_key] = content
+ else:
  config_dir = os.path.dirname(config_file)
- sections = self._parse_config(config_file)
- all_sections.append((config_dir, sections))
- return all_sections
-
- def _normalize_context_path(
- self,
- path_str: str,
- relative_to_dir: str,
- ) -> str:
- """Normalizes a context path string to an absolute path."""
- expanded_path = os.path.expanduser(path_str)
- if os.path.isabs(expanded_path):
- return os.path.abspath(expanded_path)
- return os.path.abspath(os.path.join(relative_to_dir, expanded_path))
+ formatted_path = self._format_context_path_for_writing(
+ abs_context_path,
+ config_dir,
+ )
+ new_key = f"Note: {formatted_path}"
+ sections[new_key] = content
+ # Serialize back to markdown
+ new_file_content = ""
+ for key, value in sections.items():
+ new_file_content += f"# {key}\n{demote_markdown_headers(value)}\n\n"
+ with open(config_file, "w") as f:
+ f.write(new_file_content)

- def get_contexts(self, cwd: str | None = None) -> dict[str, str]:
+ def get_notes(self, cwd: str | None = None) -> dict[str, str]:
  """Gathers all relevant contexts for a given path."""
  if cwd is None:
  cwd = os.getcwd()
- all_sections = self._get_all_sections(cwd)
- contexts: dict[str, str] = {}
- for config_dir, sections in reversed(all_sections):
- for key, value in sections.items():
- if key.startswith("Context:"):
- context_path_str = key[len("Context:") :].strip()
- abs_context_path = self._normalize_context_path(
- context_path_str,
- config_dir,
- )
- # A context is relevant if its path is an ancestor of cwd
- if os.path.commonpath([cwd, abs_context_path]) == abs_context_path:
- contexts[abs_context_path] = value
- return contexts
+ config_file = self._get_home_config_file()
+ if not os.path.exists(config_file):
+ return {}
+ config_dir = os.path.dirname(config_file)
+ sections = self._parse_config(config_file)
+ notes: dict[str, str] = {}
+ for key, value in sections.items():
+ if key.lower().startswith("note:"):
+ context_path_str = key[len("note:") :].strip()
+ abs_context_path = self._normalize_context_path(
+ context_path_str,
+ config_dir,
+ )
+ # A context is relevant if its path is an ancestor of cwd
+ if os.path.commonpath([cwd, abs_context_path]) == abs_context_path:
+ notes[abs_context_path] = value
+ return notes

  def get_workflows(self, cwd: str | None = None) -> dict[str, str]:
  """Gathers all relevant workflows for a given path."""
@@ -74,27 +84,43 @@ class LLMContextConfig:
  # Iterate from closest to farthest
  for _, sections in all_sections:
  for key, value in sections.items():
- if key.startswith("Workflow:"):
- workflow_name = key[len("Workflow:") :].strip().lower()
+ if key.lower().startswith("workflow:"):
+ workflow_name = key[len("workflow:") :].strip().lower()
  # First one found wins
  if workflow_name not in workflows:
  workflows[workflow_name] = value
  return workflows

+ def get_contexts(self, cwd: str | None = None) -> dict[str, str]:
+ """Gathers all context for a given path."""
+ if cwd is None:
+ cwd = os.getcwd()
+ all_sections = self._get_all_sections(cwd)
+ contexts: dict[str, str] = {}
+ # Iterate from closest to farthest
+ for context_path, sections in all_sections:
+ for key, value in sections.items():
+ if key.lower().strip() == "context":
+ if context_path not in contexts:
+ contexts[context_path] = value
+ return contexts
+
  def _format_context_path_for_writing(
  self,
  path_to_write: str,
- cwd: str,
+ relative_to_dir: str,
  ) -> str:
  """Formats a path for writing into a context file key."""
  home_dir = os.path.expanduser("~")
- abs_path_to_write = os.path.abspath(os.path.join(cwd, path_to_write))
- abs_cwd = os.path.abspath(cwd)
- # Rule 1: Inside CWD
- if abs_path_to_write.startswith(abs_cwd):
- if abs_path_to_write == abs_cwd:
+ abs_path_to_write = os.path.abspath(
+ os.path.join(relative_to_dir, path_to_write)
+ )
+ abs_relative_to_dir = os.path.abspath(relative_to_dir)
+ # Rule 1: Inside relative_to_dir
+ if abs_path_to_write.startswith(abs_relative_to_dir):
+ if abs_path_to_write == abs_relative_to_dir:
  return "."
- return os.path.relpath(abs_path_to_write, abs_cwd)
+ return os.path.relpath(abs_path_to_write, abs_relative_to_dir)
  # Rule 2: Inside Home
  if abs_path_to_write.startswith(home_dir):
  if abs_path_to_write == home_dir:
@@ -103,56 +129,47 @@ class LLMContextConfig:
  # Rule 3: Absolute
  return abs_path_to_write

- def write_context(
- self,
- content: str,
- context_path: str | None = None,
- cwd: str | None = None,
- ):
- """Writes content to a context block in CWD's configuration file."""
- if cwd is None:
- cwd = os.getcwd()
- if context_path is None:
- context_path = cwd
-
- config_file = os.path.join(cwd, CFG.LLM_CONTEXT_FILE)
-
- sections = {}
- if os.path.exists(config_file):
- sections = self._parse_config(config_file)
-
- abs_context_path = os.path.abspath(os.path.join(cwd, context_path))
-
- found_key = None
- for key in sections.keys():
- if not key.startswith("Context:"):
- continue
- context_path_str = key[len("Context:") :].strip()
- abs_key_path = self._normalize_context_path(
- context_path_str,
- os.path.dirname(config_file),
- )
- if abs_key_path == abs_context_path:
- found_key = key
+ def _find_config_files(self, cwd: str) -> list[str]:
+ configs = []
+ current_dir = cwd
+ home_dir = os.path.expanduser("~")
+ while True:
+ config_path = os.path.join(current_dir, CFG.LLM_CONTEXT_FILE)
+ if os.path.exists(config_path):
+ configs.append(config_path)
+ if current_dir == home_dir or current_dir == "/":
  break
+ current_dir = os.path.dirname(current_dir)
+ return configs

- if found_key:
- sections[found_key] = content
- else:
- formatted_path = self._format_context_path_for_writing(
- context_path,
- cwd,
- )
- new_key = f"Context: {formatted_path}"
- sections[new_key] = content
+ def _get_home_config_file(self) -> str:
+ home_dir = os.path.expanduser("~")
+ return os.path.join(home_dir, CFG.LLM_CONTEXT_FILE)

- # Serialize back to markdown
- new_file_content = ""
- for key, value in sections.items():
- new_file_content += f"# {key}\n{demote_markdown_headers(value)}\n\n"
+ def _parse_config(self, file_path: str) -> dict[str, str]:
+ with open(file_path, "r") as f:
+ content = f.read()
+ return markdown_to_dict(content)

- with open(config_file, "w") as f:
- f.write(new_file_content)
+ def _get_all_sections(self, cwd: str) -> list[tuple[str, dict[str, str]]]:
+ config_files = self._find_config_files(cwd)
+ all_sections = []
+ for config_file in config_files:
+ config_dir = os.path.dirname(config_file)
+ sections = self._parse_config(config_file)
+ all_sections.append((config_dir, sections))
+ return all_sections
+
+ def _normalize_context_path(
+ self,
+ path_str: str,
+ relative_to_dir: str,
+ ) -> str:
+ """Normalizes a context path string to an absolute path."""
+ expanded_path = os.path.expanduser(path_str)
+ if os.path.isabs(expanded_path):
+ return os.path.abspath(expanded_path)
+ return os.path.abspath(os.path.join(relative_to_dir, expanded_path))


  llm_context_config = LLMContextConfig()
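Both the removed `get_contexts` and the new `get_notes` decide whether a section applies with the same ancestor test: a note registered for a path applies when that path is `cwd` itself or one of its ancestors. A minimal standalone sketch of that rule (the function name here is illustrative, not part of zrb):

```python
import os


def note_applies(cwd: str, note_path: str) -> bool:
    """A note applies when note_path equals cwd or is an ancestor of cwd,
    mirroring the `os.path.commonpath(...) == abs_context_path` check."""
    cwd = os.path.abspath(cwd)
    note_path = os.path.abspath(note_path)
    return os.path.commonpath([cwd, note_path]) == note_path


print(note_applies("/home/user/project/src", "/home/user/project"))  # True
print(note_applies("/home/user/project", "/home/user/other"))        # False
```

Using `commonpath` rather than a plain `startswith` avoids false positives such as `/home/user/project-old` matching a note registered for `/home/user/project`.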
@@ -8,21 +8,17 @@ def markdown_to_dict(markdown: str) -> dict[str, str]:
  current_title = ""
  current_content: list[str] = []
  fence_stack: list[str] = []
-
  fence_pattern = re.compile(r"^([`~]{3,})(.*)$")
  h1_pattern = re.compile(r"^# (.+)$")
-
  for line in markdown.splitlines():
  # Detect code fence open/close
  fence_match = fence_pattern.match(line.strip())
-
  if fence_match:
  fence = fence_match.group(1)
  if fence_stack and fence_stack[-1] == fence:
  fence_stack.pop() # close current fence
  else:
  fence_stack.append(fence) # open new fence
-
  # Only parse H1 when not inside a code fence
  if not fence_stack:
  h1_match = h1_pattern.match(line)
@@ -34,9 +30,7 @@ def markdown_to_dict(markdown: str) -> dict[str, str]:
  current_title = h1_match.group(1).strip()
  current_content = []
  continue
-
  current_content.append(line)
-
  # Save final section
  if current_title:
  sections[current_title] = "\n".join(current_content).strip()
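The fence bookkeeping this hunk tidies up is easiest to see in a condensed, self-contained sketch of the same parsing approach (simplified from `markdown_to_dict`; not the exact zrb implementation):

```python
import re


def split_h1_sections(markdown: str) -> dict[str, str]:
    """Split markdown into {h1 title: body}, ignoring '# ' lines inside code fences."""
    sections: dict[str, str] = {}
    title, body, fence_stack = "", [], []
    fence_pattern = re.compile(r"^([`~]{3,})(.*)$")
    for line in markdown.splitlines():
        fence_match = fence_pattern.match(line.strip())
        if fence_match:
            fence = fence_match.group(1)
            if fence_stack and fence_stack[-1] == fence:
                fence_stack.pop()  # this line closes the currently open fence
            else:
                fence_stack.append(fence)  # this line opens a new fence
        if not fence_stack:
            h1_match = re.match(r"^# (.+)$", line)
            if h1_match:
                if title:  # flush the previous section
                    sections[title] = "\n".join(body).strip()
                title, body = h1_match.group(1).strip(), []
                continue
        body.append(line)
    if title:  # save the final section
        sections[title] = "\n".join(body).strip()
    return sections


doc = "# Notes\ntext\n~~~\n# not a heading\n~~~\n# Workflow\nsteps"
print(split_h1_sections(doc))
```

The stack means a `# ` line inside an open ``` ``` ``` or `~~~` fence is treated as content, and a fence only closes when the marker matches the one that opened it.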
zrb/group/any_group.py CHANGED
@@ -1,9 +1,13 @@
  from abc import ABC, abstractmethod
+ from typing import Generic, TypeVar

  from zrb.task.any_task import AnyTask

+ GroupType = TypeVar("GroupType", bound="AnyGroup")
+ TaskType = TypeVar("TaskType", bound=AnyTask)

- class AnyGroup(ABC):
+
+ class AnyGroup(ABC, Generic[GroupType, TaskType]):
  @property
  @abstractmethod
  def name(self) -> str:
@@ -35,11 +39,11 @@ class AnyGroup(ABC):
  pass

  @abstractmethod
- def add_group(self, group: "AnyGroup | str") -> "AnyGroup":
+ def add_group(self, group: GroupType, alias: str | None) -> GroupType:
  pass

  @abstractmethod
- def add_task(self, task: AnyTask, alias: str | None = None) -> AnyTask:
+ def add_task(self, task: TaskType, alias: str | None = None) -> TaskType:
  pass

  @abstractmethod
@@ -55,5 +59,5 @@ class AnyGroup(ABC):
  pass

  @abstractmethod
- def get_group_by_alias(self, name: str) -> "AnyGroup | None":
+ def get_group_by_alias(self, alias: str) -> "AnyGroup | None":
  pass
zrb/group/group.py CHANGED
@@ -1,8 +1,10 @@
- from zrb.group.any_group import AnyGroup
+ from typing import Generic, TypeVar
+
+ from zrb.group.any_group import AnyGroup, GroupType, TaskType
  from zrb.task.any_task import AnyTask


- class Group(AnyGroup):
+ class Group(AnyGroup, Generic[GroupType, TaskType]):
  def __init__(
  self, name: str, description: str | None = None, banner: str | None = None
  ):
@@ -41,13 +43,13 @@ class Group(AnyGroup):
  alias.sort()
  return {name: self._tasks.get(name) for name in alias}

- def add_group(self, group: AnyGroup | str, alias: str | None = None) -> AnyGroup:
- real_group = Group(group) if isinstance(group, str) else group
+ def add_group(self, group: GroupType, alias: str | None = None) -> GroupType:
+ real_group: GroupType = Group(group) if isinstance(group, str) else group
  alias = alias if alias is not None else real_group.name
  self._groups[alias] = real_group
  return real_group

- def add_task(self, task: AnyTask, alias: str | None = None) -> AnyTask:
+ def add_task(self, task: TaskType, alias: str | None = None) -> TaskType:
  alias = alias if alias is not None else task.name
  self._tasks[alias] = task
  return task
zrb/input/text_input.py CHANGED
@@ -1,13 +1,9 @@
- import os
- import subprocess
- import tempfile
  from collections.abc import Callable

  from zrb.config.config import CFG
  from zrb.context.any_shared_context import AnySharedContext
  from zrb.input.base_input import BaseInput
  from zrb.util.cli.text import edit_text
- from zrb.util.file import read_file


  class TextInput(BaseInput):