@d5render/cli 0.1.56 → 0.1.60

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,10 +1,38 @@
  ---
  name: code-review
- description: When the task is code-review, please follow this document to proceed the work.
+ description: code-review task process.
+ disable-model-invocation: false
  ---
 
- 1. read file [code-review.instructions.md](./code-review.instructions.md).
- 2. find all Markdown resumes within the project and their basic understanding of the project.
- 3. integrate your understanding of code-review to process task.
- 4. the merge request is only for change reference and is not the main objective of the review; therefore, it does not require in-depth analysis.
- 5. pay attention to the issue tracking description in the context, and try to use relevant tools to obtain more context.
+ For all review-related tasks, the subagents and the main agent may confirm details with each other and proceed directly, without asking the user for confirmation. Note, however, that all executed tasks must be listed at the end, and only the main agent may execute report-related tasks.
+
+ - Obtain as much context as possible and return a summary, including but not limited to the following:
+   - Use tools to locate files related to the changes. However, commits that explicitly request a merge, such as those titled "Merge", do not require processing.
+   - Use tools to construct a data flow graph of the changes, and ensure the relevant graphs are as complete as possible.
+   - **You MUST use LSP tools** to build a complete code relationship graph. Follow the steps in [LSP.md](./lsp.md) strictly.
+   - Use any other tools mentioned in the related documents.
+ - Then, launch parallel agents to independently review the change. The agents should do the following, then return a list of issues and the reason each issue was flagged (e.g. .md adherence, bug, historical git context, etc.):
+   1. Agent #1: Audit the changes to make sure they comply with the .md files.
+      - Read the README.md, AGENTS.md, .github/**.md, and .cursor/**.md files in the **same directory** as the changes and in the **root directory**.
+      - If determining whether a document contains related content requires loading additional files, load them and return the findings to the main agent.
+      - Check [severity.instructions.md](./severity.instructions.md) to clarify error severity levels, and follow any project-specific redefinition of severity levels in the relevant documentation.
+   2. Agent #2: Read the file changes to find obvious bugs.
+   3. Agent #3:
+      - Read the git blame and history of the modified code, to identify any bugs in light of that historical context.
+      - Read code comments in the modified files, and make sure the changes in the pull request comply with any guidance in those comments.
+      - Read previous pull requests that touched these files, and check for any comments on those pull requests that may also apply to the current pull request.
+      - Consider other related aspects of the affected functions and provide corresponding suggestions.
+ - Parallel subagents are subject to the following additional rules:
+   - The subagents **should not** perform any report-related tasks.
+   - For each issue found, return a score indicating the agent's confidence that the issue is real rather than a false positive. The agent should score each issue on a scale from 0-100. For issues that were flagged due to CLAUDE.md instructions, the agent should double-check that the CLAUDE.md actually calls out that issue specifically. The scale is (give this rubric to the agent verbatim):
+     - 0: Not confident at all. This is a false positive that doesn't stand up to light scrutiny, or is a pre-existing issue.
+     - 25: Somewhat confident. This might be a real issue, but may also be a false positive. The agent wasn't able to verify that it's a real issue. If the issue is stylistic, it is one that was not explicitly called out in the relevant CLAUDE.md.
+     - 50: Moderately confident. The agent was able to verify this is a real issue, but it might be a nitpick or not happen very often in practice. Relative to the rest of the PR, it's not very important.
+     - 75: Highly confident. The agent double-checked the issue and verified that it is very likely a real issue that will be hit in practice. The existing approach in the PR is insufficient. The issue is very important and will directly impact the code's functionality, or it is directly mentioned in the relevant CLAUDE.md.
+     - 100: Absolutely certain. The agent double-checked the issue and confirmed that it is definitely a real issue that will happen frequently in practice. The evidence directly confirms this.
+   - List the task plan and output the task completion status.
+ - Then, the main agent performs the following actions:
+   - First, perform its own evaluation of the code. Then, compare and consolidate its own issue scores with those from the subagents; for issues scoring below 60, refine the project context and re-perform the review.
+   - The final score is the average of the three scores.
+   - List the status of all tasks.
+   - Execute the report-related processes; reports must be written in Chinese (中文).
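The scoring flow the added instructions describe — a 0-100 confidence per issue, a below-60 re-review trigger, and a final score averaged across the three passes — can be sketched roughly as follows. This is a minimal illustration only, not code from the package; the `Issue` class and function names are hypothetical, while the rubric scale, the 60-point threshold, and the three-score average come from the text above.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Issue:
    """One flagged review issue with its confidence scores (hypothetical model)."""
    description: str
    # 0-100 rubric scores: main agent's own pass, subagent's pass, re-review pass.
    scores: list[float] = field(default_factory=list)

    @property
    def final_score(self) -> float:
        # "The final score is the average of the three scores."
        return mean(self.scores)

def needs_rereview(issue: Issue, threshold: float = 60.0) -> bool:
    # Issues scoring below 60 have their project context refined
    # and the review is performed again.
    return any(s < threshold for s in issue.scores)

issue = Issue("possible nil dereference in handler", scores=[75, 50, 70])
print(issue.final_score)        # → 65.0
print(needs_rereview(issue))    # → True (the subagent's 50 is below 60)
```

A real implementation would also have to carry each issue's flag reason (.md adherence, bug, historical context) so the re-review pass knows which context to refine.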