bioguider 0.2.19__py3-none-any.whl → 0.2.21__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release: this version of bioguider has been flagged as potentially problematic.
- bioguider/agents/agent_utils.py +18 -10
- bioguider/agents/collection_execute_step.py +1 -1
- bioguider/agents/collection_observe_step.py +7 -2
- bioguider/agents/collection_task_utils.py +1 -0
- bioguider/agents/common_conversation.py +20 -2
- bioguider/agents/consistency_collection_step.py +100 -0
- bioguider/agents/consistency_evaluation_task.py +56 -0
- bioguider/agents/consistency_evaluation_task_utils.py +13 -0
- bioguider/agents/consistency_observe_step.py +107 -0
- bioguider/agents/consistency_query_step.py +74 -0
- bioguider/agents/evaluation_task.py +2 -2
- bioguider/agents/evaluation_userguide_prompts.py +162 -0
- bioguider/agents/evaluation_userguide_task.py +131 -0
- bioguider/agents/prompt_utils.py +15 -8
- bioguider/database/code_structure_db.py +489 -0
- bioguider/generation/__init__.py +39 -0
- bioguider/generation/change_planner.py +140 -0
- bioguider/generation/document_renderer.py +47 -0
- bioguider/generation/llm_cleaner.py +43 -0
- bioguider/generation/llm_content_generator.py +69 -0
- bioguider/generation/llm_injector.py +270 -0
- bioguider/generation/models.py +77 -0
- bioguider/generation/output_manager.py +54 -0
- bioguider/generation/repo_reader.py +37 -0
- bioguider/generation/report_loader.py +151 -0
- bioguider/generation/style_analyzer.py +36 -0
- bioguider/generation/suggestion_extractor.py +136 -0
- bioguider/generation/test_metrics.py +104 -0
- bioguider/managers/evaluation_manager.py +24 -0
- bioguider/managers/generation_manager.py +160 -0
- bioguider/managers/generation_test_manager.py +74 -0
- bioguider/utils/code_structure_builder.py +47 -0
- bioguider/utils/constants.py +12 -12
- bioguider/utils/python_file_handler.py +65 -0
- bioguider/utils/r_file_handler.py +368 -0
- bioguider/utils/utils.py +34 -1
- {bioguider-0.2.19.dist-info → bioguider-0.2.21.dist-info}/METADATA +1 -1
- bioguider-0.2.21.dist-info/RECORD +77 -0
- bioguider-0.2.19.dist-info/RECORD +0 -51
- {bioguider-0.2.19.dist-info → bioguider-0.2.21.dist-info}/LICENSE +0 -0
- {bioguider-0.2.19.dist-info → bioguider-0.2.21.dist-info}/WHEEL +0 -0
bioguider/agents/evaluation_userguide_prompts.py
ADDED
@@ -0,0 +1,162 @@
+
+INDIVIDUAL_USERGUIDE_EVALUATION_SYSTEM_PROMPT = """
+You are an expert in evaluating the quality of user guides in software repositories.
+Your task is to analyze the provided files related to the user guide and generate a structured quality assessment based on the following criteria.
+---
+
+### **Evaluation Criteria**
+
+1. **Readability**:
+    * **Flesch Reading Ease**: `{flesch_reading_ease}` (A higher score is better, with 60-70 being easily understood by most adults).
+    * **Flesch-Kincaid Grade Level**: `{flesch_kincaid_grade}` (Represents the US school-grade level needed to understand the text).
+    * **Gunning Fog Index**: `{gunning_fog_index}` (A score above 12 is generally considered too hard for most people).
+    * **SMOG Index**: `{smog_index}` (Estimates the years of education needed to understand the text).
+    * **Assessment**: Based on these scores, evaluate the overall readability and technical complexity of the language used.
+
+2. **Arguments and Clarity**:
+    * **Assessment**: [Your evaluation of whether it provides a clear **description** of arguments and their usage]
+    * **Improvement Suggestions**:
+        * **Original text:** [Quote a specific line/section from the user guide.]
+        * **Improving comments:** [Provide your suggestions to improve clarity.]
+
+3. **Return Value and Clarity**:
+    * **Assessment**: [Your evaluation of whether it provides a clear **description** of the return value and its meaning]
+    * **Improvement Suggestions**:
+        * **Original text:** [Quote a specific line/section from the user guide.]
+        * **Improving comments:** [Provide your suggestions to improve clarity.]
+
+4. **Context and Purpose**:
+    * **Assessment**: [Your evaluation of whether it provides a clear **description** of the context and purpose of the module]
+    * **Improvement Suggestions**:
+        * **Original text:** [Quote a specific line/section from the user guide.]
+        * **Improving comments:** [Provide your suggestions to improve clarity.]
+
+5. **Error Handling**:
+    * **Assessment**: [Your evaluation of whether it provides a clear **description** of error handling]
+    * **Improvement Suggestions:**
+        * **Original text:** [Quote a specific line/section from the user guide.]
+        * **Improving comments:** [Provide your suggestions to improve clarity.]
+
+6. **Usage Examples**:
+    * **Assessment**: [Your evaluation of whether it provides a clear **description** of usage examples]
+    * **Improvement Suggestions:**
+        * **Original text:** [Quote a specific line/section from the user guide.]
+        * **Improving comments:** [Provide your suggestions to improve clarity.]
+
+7. **Overall Score**: Give an overall quality rating of the User Guide information.
+    * Output: `Poor`, `Fair`, `Good`, or `Excellent`
+
+---
+
+### **Final Report Output**
+Your final report must **exactly match** the following format. Do not add or omit any sections.
+
+**FinalAnswer**
+* **Overall Score:** [Poor / Fair / Good / Excellent]
+* **Overall Key Strengths**: <brief summary of the User Guide's strongest points in 2-3 sentences>
+* **Overall Improvement Suggestions:**
+    - "Original text snippet 1" - Improving comment 1
+    - "Original text snippet 2" - Improving comment 2
+    - ...
+* **Readability Analysis Score:** [Poor / Fair / Good / Excellent]
+* **Readability Analysis Key Strengths**: <brief summary of the User Guide's strongest points in 2-3 sentences>
+* **Readability Analysis Improvement Suggestions:**
+    - "Original text snippet 1" - Improving comment 1
+    - "Original text snippet 2" - Improving comment 2
+    - ...
+* **Arguments and Clarity Score:** [Poor / Fair / Good / Excellent]
+* **Arguments and Clarity Key Strengths**: <brief summary of the User Guide's strongest points in 2-3 sentences>
+* **Arguments and Clarity Improvement Suggestions:**
+    - "Original text snippet 1" - Improving comment 1
+    - "Original text snippet 2" - Improving comment 2
+    - ...
+* **Return Value and Clarity Score:** [Poor / Fair / Good / Excellent]
+* **Return Value and Clarity Key Strengths**: <brief summary of the User Guide's strongest points in 2-3 sentences>
+* **Return Value and Clarity Improvement Suggestions:**
+    - "Original text snippet 1" - Improving comment 1
+    - "Original text snippet 2" - Improving comment 2
+    - ...
+* **Context and Purpose Score:** [Poor / Fair / Good / Excellent]
+* **Context and Purpose Key Strengths**: <brief summary of the User Guide's strongest points in 2-3 sentences>
+* **Context and Purpose Improvement Suggestions:**
+    - "Original text snippet 1" - Improving comment 1
+    - "Original text snippet 2" - Improving comment 2
+    - ...
+* **Error Handling Score:** [Poor / Fair / Good / Excellent]
+* **Error Handling Key Strengths**: <brief summary of the User Guide's strongest points in 2-3 sentences>
+* **Error Handling Improvement Suggestions:**
+    - "Original text snippet 1" - Improving comment 1
+    - "Original text snippet 2" - Improving comment 2
+    - ...
+* **Usage Examples Score:** [Poor / Fair / Good / Excellent]
+* **Usage Examples Key Strengths**: <brief summary of the User Guide's strongest points in 2-3 sentences>
+* **Usage Examples Improvement Suggestions:**
+    - "Original text snippet 1" - Improving comment 1
+    - "Original text snippet 2" - Improving comment 2
+    - ...
+...
+
+---
+
+### **User Guide Content:**
+{userguide_content}
+
+---
+
+"""
+
+CONSISTENCY_EVAL_SYSTEM_PROMPT = """
+You are an expert in evaluating the consistency of user guides in software repositories.
+Your task is to analyze both:
+1. the provided file related to the user guide/API documentation,
+2. the code definitions related to the user guide/API documentation
+and generate a structured consistency assessment based on the following criteria.
+
+---
+
+### **Evaluation Criteria**
+
+**Consistency**:
+* **Score**: [Poor / Fair / Good / Excellent]
+* **Assessment**: [Your evaluation of whether the user guide/API documentation is consistent with the code definitions]
+* **Development**: [A list of inconsistent function/class/method names and inconsistent docstrings]
+* **Strengths**: [A list of strengths of the user guide/API documentation on consistency]
+
+### **Output Format**
+Your output **must exactly match** the following format:
+```
+**Consistency**:
+* **Score**: [Poor / Fair / Good / Excellent]
+* **Assessment**: [Your evaluation of whether the user guide/API documentation is consistent with the code definitions]
+* **Development**: [A list of inconsistent function/class/method names and inconsistent docstrings]
+* **Strengths**: [A list of strengths of the user guide/API documentation on consistency]
+```
+
+### **Output Example**
+
+```
+**Consistency**:
+* **Assessment**: [Your evaluation of whether the user guide/API documentation is consistent with the code definitions]
+* **Development**:
+    - Inconsistent function/class/method name 1
+    - Inconsistent docstring 1
+    - Inconsistent function/class/method name 2
+    - Inconsistent docstring 2
+    - ...
+* **Strengths**:
+    - Strength 1
+    - Strength 2
+    - ...
+```
+
+---
+
+### **Input User Guide/API Documentation**
+{user_guide_api_documentation}
+
+### **Code Definitions**
+{code_definitions}
+
+---
+
+"""
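The prompt template above leaves `{flesch_reading_ease}`, `{smog_index}`, `{userguide_content}`, and the other placeholders to be filled at evaluation time. A minimal stdlib-only sketch of that substitution step, using plain `str.format` on a short excerpt (the real code routes this through langchain's `ChatPromptTemplate`, but the placeholder semantics are the same for simple named fields):

```python
# Excerpt of the system prompt with two of its named placeholders.
TEMPLATE = (
    "* **Flesch Reading Ease**: `{flesch_reading_ease}` "
    "(A higher score is better, with 60-70 being easily understood by most adults).\n"
    "### **User Guide Content:**\n"
    "{userguide_content}"
)

def render_prompt(flesch_reading_ease: float, userguide_content: str) -> str:
    """Fill the readability score and document body into the template."""
    return TEMPLATE.format(
        flesch_reading_ease=flesch_reading_ease,
        userguide_content=userguide_content,
    )

prompt = render_prompt(64.2, "Call `run()` to start the pipeline.")
```

Note that any literal braces in a template handled this way would need doubling (`{{`, `}}`); the prompt text above contains none, so the named placeholders substitute cleanly.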
bioguider/agents/evaluation_userguide_task.py
ADDED
@@ -0,0 +1,131 @@
+
+from pathlib import Path
+import logging
+from langchain.prompts import ChatPromptTemplate
+from pydantic import BaseModel, Field
+
+from bioguider.agents.agent_utils import read_file
+from bioguider.agents.collection_task import CollectionTask
+from bioguider.agents.common_agent_2step import CommonAgentTwoSteps
+from bioguider.agents.consistency_evaluation_task import ConsistencyEvaluationTask, ConsistencyEvaluationResult
+from bioguider.agents.prompt_utils import CollectionGoalItemEnum
+from bioguider.utils.constants import (
+    DEFAULT_TOKEN_USAGE,
+)
+from ..utils.pyphen_utils import PyphenReadability
+
+from .evaluation_task import EvaluationTask
+from .agent_utils import read_file
+from bioguider.utils.utils import increase_token_usage
+from .evaluation_userguide_prompts import INDIVIDUAL_USERGUIDE_EVALUATION_SYSTEM_PROMPT
+
+
+class UserGuideEvaluationResult(BaseModel):
+    overall_score: str = Field(description="A string value, could be `Poor`, `Fair`, `Good`, or `Excellent`")
+    overall_key_strengths: str = Field(description="A string value, the key strengths of the user guide")
+    overall_improvement_suggestions: str = Field(description="Suggestions to improve the overall score if necessary")
+    readability_score: str = Field(description="A string value, could be `Poor`, `Fair`, `Good`, or `Excellent`")
+    readability_suggestions: str = Field(description="Suggestions to improve readability if necessary")
+    context_and_purpose_score: str = Field(description="A string value, could be `Poor`, `Fair`, `Good`, or `Excellent`")
+    context_and_purpose_suggestions: str = Field(description="Suggestions to improve context and purpose if necessary")
+    error_handling_score: str = Field(description="A string value, could be `Poor`, `Fair`, `Good`, or `Excellent`")
+    error_handling_suggestions: str = Field(description="Suggestions to improve error handling if necessary")
+
+class IndividualUserGuideEvaluationResult(BaseModel):
+    user_guide_evaluation: UserGuideEvaluationResult | None = Field(description="The evaluation result of the user guide")
+    consistency_evaluation: ConsistencyEvaluationResult | None = Field(description="The evaluation result of the consistency of the user guide")
+
+logger = logging.getLogger(__name__)
+
+class EvaluationUserGuideTask(EvaluationTask):
+    def __init__(
+        self,
+        llm,
+        repo_path,
+        gitignore_path,
+        meta_data=None,
+        step_callback=None,
+        summarized_files_db=None,
+        code_structure_db=None,
+    ):
+        super().__init__(llm, repo_path, gitignore_path, meta_data, step_callback, summarized_files_db)
+        self.evaluation_name = "User Guide Evaluation"
+        self.code_structure_db = code_structure_db
+
+    def _collect_files(self):
+        task = CollectionTask(
+            llm=self.llm,
+            step_callback=self.step_callback,
+            summarized_files_db=self.summarized_files_db,
+        )
+        task.compile(
+            repo_path=self.repo_path,
+            gitignore_path=Path(self.repo_path, ".gitignore"),
+            goal_item=CollectionGoalItemEnum.UserGuide.name,
+        )
+        files = task.collect()
+        return files
+
+    def _evaluate_consistency(self, file: str) -> tuple[ConsistencyEvaluationResult, dict]:
+        consistency_evaluation_task = ConsistencyEvaluationTask(
+            llm=self.llm,
+            code_structure_db=self.code_structure_db,
+            step_callback=self.step_callback,
+        )
+        file = file.strip()
+        with open(Path(self.repo_path, file), "r") as f:
+            user_guide_api_documentation = f.read()
+        return consistency_evaluation_task.evaluate(user_guide_api_documentation), {**DEFAULT_TOKEN_USAGE}
+
+    def _evaluate_individual_userguide(self, file: str) -> tuple[IndividualUserGuideEvaluationResult | None, dict]:
+        content = read_file(Path(self.repo_path, file))
+
+        if content is None:
+            logger.error(f"Error in reading file {file}")
+            return None, {**DEFAULT_TOKEN_USAGE}
+
+        readability = PyphenReadability()
+        flesch_reading_ease, flesch_kincaid_grade, gunning_fog_index, smog_index, \
+            _, _, _, _, _ = readability.readability_metrics(content)
+        system_prompt = ChatPromptTemplate.from_template(
+            INDIVIDUAL_USERGUIDE_EVALUATION_SYSTEM_PROMPT
+        ).format(
+            flesch_reading_ease=flesch_reading_ease,
+            flesch_kincaid_grade=flesch_kincaid_grade,
+            gunning_fog_index=gunning_fog_index,
+            smog_index=smog_index,
+            userguide_content=content,
+        )
+        agent = CommonAgentTwoSteps(llm=self.llm)
+        res, _, token_usage, reasoning_process = agent.go(
+            system_prompt=system_prompt,
+            instruction_prompt="Now, let's begin the user guide/API documentation evaluation.",
+            schema=UserGuideEvaluationResult,
+        )
+        res: UserGuideEvaluationResult = res
+
+        consistency_evaluation_result, _temp_token_usage = self._evaluate_consistency(file)
+        if consistency_evaluation_result is None:
+            # Not enough information to evaluate the consistency of the user guide/API documentation
+            consistency_evaluation_result = ConsistencyEvaluationResult(
+                consistency_score="N/A",
+                consistency_assessment="No sufficient information to evaluate the consistency of the user guide/API documentation",
+                consistency_development=[],
+                consistency_strengths=[],
+            )
+        return IndividualUserGuideEvaluationResult(
+            user_guide_evaluation=res,
+            consistency_evaluation=consistency_evaluation_result,
+        ), token_usage
+
+    def _evaluate(self, files: list[str] | None = None) -> tuple[dict[str, IndividualUserGuideEvaluationResult] | None, dict, list[str]]:
+        total_token_usage = {**DEFAULT_TOKEN_USAGE}
+        user_guide_evaluation_results = {}
+        for file in files:
+            if file.endswith(".py") or file.endswith(".R"):
+                continue
+            user_guide_evaluation_result, token_usage = self._evaluate_individual_userguide(file)
+            total_token_usage = increase_token_usage(total_token_usage, token_usage)
+            user_guide_evaluation_results[file] = user_guide_evaluation_result
+
+        return user_guide_evaluation_results, total_token_usage, files
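The `_evaluate` loop above accumulates per-file LLM cost with `increase_token_usage`, starting from a copy of `DEFAULT_TOKEN_USAGE`. A stdlib-only sketch of that accumulation pattern (the key names below are an assumption; the real dict shape lives in `bioguider.utils.constants` and `bioguider.utils.utils`):

```python
# Hypothetical counter layout; the package defines its own DEFAULT_TOKEN_USAGE.
DEFAULT_TOKEN_USAGE = {"total_tokens": 0, "prompt_tokens": 0, "completion_tokens": 0}

def increase_token_usage(total: dict, delta: dict) -> dict:
    """Return a new dict with each token counter summed key-wise."""
    return {k: total.get(k, 0) + delta.get(k, 0) for k in set(total) | set(delta)}

# Accumulate usage reported for two evaluated files.
total = {**DEFAULT_TOKEN_USAGE}
for usage in [
    {"total_tokens": 120, "prompt_tokens": 100, "completion_tokens": 20},
    {"total_tokens": 80, "prompt_tokens": 50, "completion_tokens": 30},
]:
    total = increase_token_usage(total, usage)
```

Copying with `{**DEFAULT_TOKEN_USAGE}` (as the task code does) matters: mutating a shared default dict across evaluations would silently double-count.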
bioguider/agents/prompt_utils.py
CHANGED
@@ -104,19 +104,26 @@ COLLECTION_PROMPTS = {
         "goal_item": "User Guide",
         "related_file_description": """A document qualifies as a **User Guide** if it includes **at least one** of the following elements.
 If **any one** of these is present, the document should be classified as a User Guide — full coverage is **not required**:
-
-
-
-
-
-
-
-
+- **Not source code or a script** (*.py, *.R) or notebook (*.ipynb, *.Rmd) that is not intended for end-user interaction.
+- Document **functions, methods, or classes**
+- Describe **input parameters, return values**, and **usage syntax**
+- Include **technical guidance** for using specific APIs
+- Are often found in folders such as
+  * `man/` (for `.Rd` files in R)
+  * `docs/reference/`, `docs/api/`, `docs/dev/` (for Python) or similar
+  * Standalone files with names like `api.md`, `reference.md`, `user_guide.md`
+**Do not** classify the document as a User Guide if it primarily serves as a Tutorial or Example. Such documents typically include:
 - Sample Datasets: Example data used to illustrate functionality.
 - Narrative Explanations: Story-like descriptions guiding the user through examples.
 - Code Walkthroughs: Detailed explanations of code snippets in a tutorial format.
 **Do not** classify the document as a User Guide if it is source code or a script (*.py, *.R) that is not intended for end-user interaction.
 - You can include directory names if all files in the directory are relevant to the goal item.""",
+        "plan_important_instructions": """ - **Do not** try to summarize or read the content of any source code or script (*.py, *.R) or notebook (*.ipynb, *.Rmd) that is not intended for end-user interaction.
+ - **Do not** classify the document as a User Guide if it is source code or a script (*.py, *.R) that is not intended for end-user interaction.
+ - **Do not** classify the document as a User Guide if it is a notebook (*.ipynb, *.Rmd) that is not intended for end-user interaction.
+ - Your plan **must not** include any source code or script (*.py, *.R) or notebook (*.ipynb, *.Rmd) that is not intended for end-user interaction.""",
+        "observe_important_instructions": """ - **Do not** classify the document as a User Guide if it is source code or a script (*.py, *.R) that is not intended for end-user interaction.
+ - **Do not** include any source code or script (*.py, *.R) or notebook (*.ipynb, *.Rmd) in the final answer that is not intended for end-user interaction."""
     },
     "Tutorial": {
         "goal_item": "Tutorials & Vignettes",