xai-review 0.26.0__py3-none-any.whl → 0.28.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of xai-review might be problematic.

Files changed (164)
  1. ai_review/cli/commands/run_inline_reply_review.py +7 -0
  2. ai_review/cli/commands/run_summary_reply_review.py +7 -0
  3. ai_review/cli/main.py +17 -0
  4. ai_review/clients/bitbucket/pr/client.py +45 -8
  5. ai_review/clients/bitbucket/pr/schema/comments.py +21 -2
  6. ai_review/clients/bitbucket/pr/schema/files.py +8 -3
  7. ai_review/clients/bitbucket/pr/schema/pull_request.py +1 -5
  8. ai_review/clients/bitbucket/pr/schema/user.py +7 -0
  9. ai_review/clients/bitbucket/tools.py +6 -0
  10. ai_review/clients/github/pr/client.py +98 -13
  11. ai_review/clients/github/pr/schema/comments.py +23 -1
  12. ai_review/clients/github/pr/schema/files.py +2 -1
  13. ai_review/clients/github/pr/schema/pull_request.py +1 -4
  14. ai_review/clients/github/pr/schema/reviews.py +2 -1
  15. ai_review/clients/github/pr/schema/user.py +6 -0
  16. ai_review/clients/github/pr/types.py +11 -1
  17. ai_review/clients/github/tools.py +6 -0
  18. ai_review/clients/gitlab/mr/client.py +67 -7
  19. ai_review/clients/gitlab/mr/schema/changes.py +1 -5
  20. ai_review/clients/gitlab/mr/schema/discussions.py +19 -8
  21. ai_review/clients/gitlab/mr/schema/notes.py +5 -1
  22. ai_review/clients/gitlab/mr/schema/user.py +7 -0
  23. ai_review/clients/gitlab/mr/types.py +16 -7
  24. ai_review/clients/gitlab/tools.py +5 -0
  25. ai_review/libs/config/prompt.py +96 -64
  26. ai_review/libs/config/review.py +2 -0
  27. ai_review/libs/config/vcs/base.py +2 -0
  28. ai_review/libs/config/vcs/pagination.py +6 -0
  29. ai_review/libs/http/paginate.py +43 -0
  30. ai_review/libs/llm/output_json_parser.py +60 -0
  31. ai_review/prompts/default_inline_reply.md +10 -0
  32. ai_review/prompts/default_summary_reply.md +14 -0
  33. ai_review/prompts/default_system_inline_reply.md +31 -0
  34. ai_review/prompts/default_system_summary_reply.md +13 -0
  35. ai_review/services/artifacts/schema.py +2 -2
  36. ai_review/services/hook/constants.py +14 -0
  37. ai_review/services/hook/service.py +95 -4
  38. ai_review/services/hook/types.py +18 -2
  39. ai_review/services/prompt/adapter.py +1 -1
  40. ai_review/services/prompt/service.py +49 -3
  41. ai_review/services/prompt/tools.py +21 -0
  42. ai_review/services/prompt/types.py +23 -0
  43. ai_review/services/review/gateway/comment.py +45 -6
  44. ai_review/services/review/gateway/llm.py +2 -1
  45. ai_review/services/review/gateway/types.py +50 -0
  46. ai_review/services/review/internal/inline/service.py +40 -0
  47. ai_review/services/review/internal/inline/types.py +8 -0
  48. ai_review/services/review/internal/inline_reply/schema.py +23 -0
  49. ai_review/services/review/internal/inline_reply/service.py +20 -0
  50. ai_review/services/review/internal/inline_reply/types.py +8 -0
  51. ai_review/services/review/{policy → internal/policy}/service.py +2 -1
  52. ai_review/services/review/internal/policy/types.py +15 -0
  53. ai_review/services/review/{summary → internal/summary}/service.py +2 -2
  54. ai_review/services/review/{summary → internal/summary}/types.py +1 -1
  55. ai_review/services/review/internal/summary_reply/__init__.py +0 -0
  56. ai_review/services/review/internal/summary_reply/schema.py +8 -0
  57. ai_review/services/review/internal/summary_reply/service.py +15 -0
  58. ai_review/services/review/internal/summary_reply/types.py +8 -0
  59. ai_review/services/review/runner/__init__.py +0 -0
  60. ai_review/services/review/runner/context.py +72 -0
  61. ai_review/services/review/runner/inline.py +80 -0
  62. ai_review/services/review/runner/inline_reply.py +80 -0
  63. ai_review/services/review/runner/summary.py +71 -0
  64. ai_review/services/review/runner/summary_reply.py +79 -0
  65. ai_review/services/review/runner/types.py +6 -0
  66. ai_review/services/review/service.py +78 -110
  67. ai_review/services/vcs/bitbucket/adapter.py +24 -0
  68. ai_review/services/vcs/bitbucket/client.py +107 -42
  69. ai_review/services/vcs/github/adapter.py +35 -0
  70. ai_review/services/vcs/github/client.py +105 -44
  71. ai_review/services/vcs/gitlab/adapter.py +26 -0
  72. ai_review/services/vcs/gitlab/client.py +91 -38
  73. ai_review/services/vcs/types.py +34 -0
  74. ai_review/tests/fixtures/clients/bitbucket.py +2 -2
  75. ai_review/tests/fixtures/clients/github.py +35 -6
  76. ai_review/tests/fixtures/clients/gitlab.py +42 -3
  77. ai_review/tests/fixtures/libs/__init__.py +0 -0
  78. ai_review/tests/fixtures/libs/llm/__init__.py +0 -0
  79. ai_review/tests/fixtures/libs/llm/output_json_parser.py +13 -0
  80. ai_review/tests/fixtures/services/hook.py +8 -0
  81. ai_review/tests/fixtures/services/llm.py +8 -5
  82. ai_review/tests/fixtures/services/prompt.py +70 -0
  83. ai_review/tests/fixtures/services/review/base.py +41 -0
  84. ai_review/tests/fixtures/services/review/gateway/__init__.py +0 -0
  85. ai_review/tests/fixtures/services/review/gateway/comment.py +98 -0
  86. ai_review/tests/fixtures/services/review/gateway/llm.py +17 -0
  87. ai_review/tests/fixtures/services/review/internal/__init__.py +0 -0
  88. ai_review/tests/fixtures/services/review/{inline.py → internal/inline.py} +8 -6
  89. ai_review/tests/fixtures/services/review/internal/inline_reply.py +25 -0
  90. ai_review/tests/fixtures/services/review/internal/policy.py +28 -0
  91. ai_review/tests/fixtures/services/review/internal/summary.py +21 -0
  92. ai_review/tests/fixtures/services/review/internal/summary_reply.py +19 -0
  93. ai_review/tests/fixtures/services/review/runner/__init__.py +0 -0
  94. ai_review/tests/fixtures/services/review/runner/context.py +50 -0
  95. ai_review/tests/fixtures/services/review/runner/inline.py +50 -0
  96. ai_review/tests/fixtures/services/review/runner/inline_reply.py +50 -0
  97. ai_review/tests/fixtures/services/review/runner/summary.py +50 -0
  98. ai_review/tests/fixtures/services/review/runner/summary_reply.py +50 -0
  99. ai_review/tests/fixtures/services/vcs.py +23 -0
  100. ai_review/tests/suites/cli/__init__.py +0 -0
  101. ai_review/tests/suites/cli/test_main.py +54 -0
  102. ai_review/tests/suites/clients/bitbucket/__init__.py +0 -0
  103. ai_review/tests/suites/clients/bitbucket/test_client.py +14 -0
  104. ai_review/tests/suites/clients/bitbucket/test_tools.py +31 -0
  105. ai_review/tests/suites/clients/github/test_tools.py +31 -0
  106. ai_review/tests/suites/clients/gitlab/test_tools.py +26 -0
  107. ai_review/tests/suites/libs/config/test_prompt.py +108 -28
  108. ai_review/tests/suites/libs/http/__init__.py +0 -0
  109. ai_review/tests/suites/libs/http/test_paginate.py +95 -0
  110. ai_review/tests/suites/libs/llm/__init__.py +0 -0
  111. ai_review/tests/suites/libs/llm/test_output_json_parser.py +155 -0
  112. ai_review/tests/suites/services/hook/test_service.py +88 -4
  113. ai_review/tests/suites/services/prompt/test_adapter.py +3 -3
  114. ai_review/tests/suites/services/prompt/test_service.py +102 -58
  115. ai_review/tests/suites/services/prompt/test_tools.py +86 -1
  116. ai_review/tests/suites/services/review/gateway/__init__.py +0 -0
  117. ai_review/tests/suites/services/review/gateway/test_comment.py +253 -0
  118. ai_review/tests/suites/services/review/gateway/test_llm.py +82 -0
  119. ai_review/tests/suites/services/review/internal/__init__.py +0 -0
  120. ai_review/tests/suites/services/review/internal/inline/__init__.py +0 -0
  121. ai_review/tests/suites/services/review/{inline → internal/inline}/test_schema.py +1 -1
  122. ai_review/tests/suites/services/review/internal/inline/test_service.py +81 -0
  123. ai_review/tests/suites/services/review/internal/inline_reply/__init__.py +0 -0
  124. ai_review/tests/suites/services/review/internal/inline_reply/test_schema.py +57 -0
  125. ai_review/tests/suites/services/review/internal/inline_reply/test_service.py +72 -0
  126. ai_review/tests/suites/services/review/internal/policy/__init__.py +0 -0
  127. ai_review/tests/suites/services/review/{policy → internal/policy}/test_service.py +1 -1
  128. ai_review/tests/suites/services/review/internal/summary/__init__.py +0 -0
  129. ai_review/tests/suites/services/review/{summary → internal/summary}/test_schema.py +1 -1
  130. ai_review/tests/suites/services/review/{summary → internal/summary}/test_service.py +2 -2
  131. ai_review/tests/suites/services/review/internal/summary_reply/__init__.py +0 -0
  132. ai_review/tests/suites/services/review/internal/summary_reply/test_schema.py +19 -0
  133. ai_review/tests/suites/services/review/internal/summary_reply/test_service.py +21 -0
  134. ai_review/tests/suites/services/review/runner/__init__.py +0 -0
  135. ai_review/tests/suites/services/review/runner/test_context.py +89 -0
  136. ai_review/tests/suites/services/review/runner/test_inline.py +100 -0
  137. ai_review/tests/suites/services/review/runner/test_inline_reply.py +109 -0
  138. ai_review/tests/suites/services/review/runner/test_summary.py +87 -0
  139. ai_review/tests/suites/services/review/runner/test_summary_reply.py +97 -0
  140. ai_review/tests/suites/services/review/test_service.py +64 -97
  141. ai_review/tests/suites/services/vcs/bitbucket/test_adapter.py +109 -0
  142. ai_review/tests/suites/services/vcs/bitbucket/{test_service.py → test_client.py} +88 -1
  143. ai_review/tests/suites/services/vcs/github/test_adapter.py +162 -0
  144. ai_review/tests/suites/services/vcs/github/{test_service.py → test_client.py} +102 -2
  145. ai_review/tests/suites/services/vcs/gitlab/test_adapter.py +105 -0
  146. ai_review/tests/suites/services/vcs/gitlab/{test_service.py → test_client.py} +99 -1
  147. {xai_review-0.26.0.dist-info → xai_review-0.28.0.dist-info}/METADATA +8 -5
  148. {xai_review-0.26.0.dist-info → xai_review-0.28.0.dist-info}/RECORD +160 -75
  149. ai_review/services/review/inline/service.py +0 -54
  150. ai_review/services/review/inline/types.py +0 -11
  151. ai_review/tests/fixtures/services/review/summary.py +0 -19
  152. ai_review/tests/suites/services/review/inline/test_service.py +0 -107
  153. /ai_review/{services/review/inline → libs/llm}/__init__.py +0 -0
  154. /ai_review/services/review/{policy → internal}/__init__.py +0 -0
  155. /ai_review/services/review/{summary → internal/inline}/__init__.py +0 -0
  156. /ai_review/services/review/{inline → internal/inline}/schema.py +0 -0
  157. /ai_review/{tests/suites/services/review/inline → services/review/internal/inline_reply}/__init__.py +0 -0
  158. /ai_review/{tests/suites/services/review → services/review/internal}/policy/__init__.py +0 -0
  159. /ai_review/{tests/suites/services/review → services/review/internal}/summary/__init__.py +0 -0
  160. /ai_review/services/review/{summary → internal/summary}/schema.py +0 -0
  161. {xai_review-0.26.0.dist-info → xai_review-0.28.0.dist-info}/WHEEL +0 -0
  162. {xai_review-0.26.0.dist-info → xai_review-0.28.0.dist-info}/entry_points.txt +0 -0
  163. {xai_review-0.26.0.dist-info → xai_review-0.28.0.dist-info}/licenses/LICENSE +0 -0
  164. {xai_review-0.26.0.dist-info → xai_review-0.28.0.dist-info}/top_level.txt +0 -0
ai_review/clients/gitlab/mr/client.py
@@ -2,20 +2,27 @@ from httpx import Response, QueryParams
 
 from ai_review.clients.gitlab.mr.schema.changes import GitLabGetMRChangesResponseSchema
 from ai_review.clients.gitlab.mr.schema.discussions import (
+    GitLabDiscussionSchema,
     GitLabGetMRDiscussionsQuerySchema,
     GitLabGetMRDiscussionsResponseSchema,
     GitLabCreateMRDiscussionRequestSchema,
-    GitLabCreateMRDiscussionResponseSchema
+    GitLabCreateMRDiscussionResponseSchema,
+    GitLabCreateMRDiscussionReplyRequestSchema,
+    GitLabCreateMRDiscussionReplyResponseSchema
 )
 from ai_review.clients.gitlab.mr.schema.notes import (
+    GitLabNoteSchema,
     GitLabGetMRNotesQuerySchema,
     GitLabGetMRNotesResponseSchema,
     GitLabCreateMRNoteRequestSchema,
     GitLabCreateMRNoteResponseSchema,
 )
 from ai_review.clients.gitlab.mr.types import GitLabMergeRequestsHTTPClientProtocol
+from ai_review.clients.gitlab.tools import gitlab_has_next_page
+from ai_review.config import settings
 from ai_review.libs.http.client import HTTPClient
 from ai_review.libs.http.handlers import handle_http_error, HTTPClientError
+from ai_review.libs.http.paginate import paginate
 
 
 class GitLabMergeRequestsHTTPClientError(HTTPClientError):
@@ -77,6 +84,19 @@ class GitLabMergeRequestsHTTPClient(HTTPClient, GitLabMergeRequestsHTTPClientPro
             json=request.model_dump(),
         )
 
+    @handle_http_error(client="GitLabMergeRequestsHTTPClient", exception=GitLabMergeRequestsHTTPClientError)
+    async def create_discussion_reply_api(
+        self,
+        project_id: str,
+        merge_request_id: str,
+        discussion_id: str,
+        request: GitLabCreateMRDiscussionReplyRequestSchema,
+    ) -> Response:
+        return await self.post(
+            f"/api/v4/projects/{project_id}/merge_requests/{merge_request_id}/discussions/{discussion_id}/notes",
+            json=request.model_dump(),
+        )
+
     async def get_changes(self, project_id: str, merge_request_id: str) -> GitLabGetMRChangesResponseSchema:
         response = await self.get_changes_api(project_id, merge_request_id)
         return GitLabGetMRChangesResponseSchema.model_validate_json(response.text)
@@ -86,18 +106,42 @@ class GitLabMergeRequestsHTTPClient(HTTPClient, GitLabMergeRequestsHTTPClientPro
         project_id: str,
         merge_request_id: str
     ) -> GitLabGetMRNotesResponseSchema:
-        query = GitLabGetMRNotesQuerySchema(per_page=100)
-        response = await self.get_notes_api(project_id, merge_request_id, query)
-        return GitLabGetMRNotesResponseSchema.model_validate_json(response.text)
+        async def fetch_page(page: int) -> Response:
+            query = GitLabGetMRNotesQuerySchema(page=page, per_page=settings.vcs.pagination.per_page)
+            return await self.get_notes_api(project_id, merge_request_id, query)
+
+        def extract_items(response: Response) -> list[GitLabNoteSchema]:
+            result = GitLabGetMRNotesResponseSchema.model_validate_json(response.text)
+            return result.root
+
+        items = await paginate(
+            max_pages=settings.vcs.pagination.max_pages,
+            fetch_page=fetch_page,
+            extract_items=extract_items,
+            has_next_page=gitlab_has_next_page
+        )
+        return GitLabGetMRNotesResponseSchema(root=items)
 
     async def get_discussions(
         self,
         project_id: str,
         merge_request_id: str
     ) -> GitLabGetMRDiscussionsResponseSchema:
-        query = GitLabGetMRDiscussionsQuerySchema(per_page=100)
-        response = await self.get_discussions_api(project_id, merge_request_id, query)
-        return GitLabGetMRDiscussionsResponseSchema.model_validate_json(response.text)
+        async def fetch_page(page: int) -> Response:
+            query = GitLabGetMRDiscussionsQuerySchema(page=page, per_page=settings.vcs.pagination.per_page)
+            return await self.get_discussions_api(project_id, merge_request_id, query)
+
+        def extract_items(response: Response) -> list[GitLabDiscussionSchema]:
+            result = GitLabGetMRDiscussionsResponseSchema.model_validate_json(response.text)
+            return result.root
+
+        items = await paginate(
+            max_pages=settings.vcs.pagination.max_pages,
+            fetch_page=fetch_page,
+            extract_items=extract_items,
+            has_next_page=gitlab_has_next_page
+        )
+        return GitLabGetMRDiscussionsResponseSchema(root=items)
 
     async def create_note(
         self,
@@ -125,3 +169,19 @@ class GitLabMergeRequestsHTTPClient(HTTPClient, GitLabMergeRequestsHTTPClientPro
             merge_request_id=merge_request_id
         )
         return GitLabCreateMRDiscussionResponseSchema.model_validate_json(response.text)
+
+    async def create_discussion_reply(
+        self,
+        project_id: str,
+        merge_request_id: str,
+        discussion_id: str,
+        body: str,
+    ) -> GitLabCreateMRDiscussionReplyResponseSchema:
+        request = GitLabCreateMRDiscussionReplyRequestSchema(body=body)
+        response = await self.create_discussion_reply_api(
+            project_id=project_id,
+            merge_request_id=merge_request_id,
+            discussion_id=discussion_id,
+            request=request,
+        )
+        return GitLabCreateMRDiscussionReplyResponseSchema.model_validate_json(response.text)
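
The new `create_discussion_reply` method wraps GitLab's notes-on-discussion endpoint, and `get_notes`/`get_discussions` now page through results instead of fetching a single page. A minimal sketch of how a caller might drive the reply flow, assuming an already-configured client instance (construction and authentication are outside this diff) and placeholder project/MR identifiers:

```python
from ai_review.clients.gitlab.mr.types import GitLabMergeRequestsHTTPClientProtocol


async def reply_to_open_threads(client: GitLabMergeRequestsHTTPClientProtocol) -> None:
    # get_discussions now paginates internally using settings.vcs.pagination (per_page / max_pages).
    discussions = await client.get_discussions(project_id="123", merge_request_id="45")

    for discussion in discussions.root:
        # Post a reply note into the existing discussion thread.
        reply = await client.create_discussion_reply(
            project_id="123",
            merge_request_id="45",
            discussion_id=discussion.id,
            body="Thanks, addressed in the next commit.",
        )
        print(reply.id, reply.body)
```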
ai_review/clients/gitlab/mr/schema/changes.py
@@ -1,10 +1,6 @@
 from pydantic import BaseModel, Field
 
-
-class GitLabUserSchema(BaseModel):
-    id: int
-    name: str
-    username: str
+from ai_review.clients.gitlab.mr.schema.user import GitLabUserSchema
 
 
 class GitLabDiffRefsSchema(BaseModel):
ai_review/clients/gitlab/mr/schema/discussions.py
@@ -1,13 +1,8 @@
-from pydantic import BaseModel, RootModel
+from pydantic import BaseModel, RootModel, Field
 
 from ai_review.clients.gitlab.mr.schema.notes import GitLabNoteSchema
 
 
-class GitLabDiscussionSchema(BaseModel):
-    id: str
-    notes: list[GitLabNoteSchema]
-
-
 class GitLabDiscussionPositionSchema(BaseModel):
     position_type: str = "text"
     base_sha: str
@@ -17,8 +12,15 @@ class GitLabDiscussionPositionSchema(BaseModel):
     new_line: int
 
 
+class GitLabDiscussionSchema(BaseModel):
+    id: str
+    notes: list[GitLabNoteSchema]
+    position: GitLabDiscussionPositionSchema | None = None
+
+
 class GitLabGetMRDiscussionsQuerySchema(BaseModel):
-    per_page: int
+    page: int = 1
+    per_page: int = 100
 
 
 class GitLabGetMRDiscussionsResponseSchema(RootModel[list[GitLabDiscussionSchema]]):
@@ -32,4 +34,13 @@ class GitLabCreateMRDiscussionRequestSchema(BaseModel):
 
 class GitLabCreateMRDiscussionResponseSchema(BaseModel):
     id: str
-    body: str | None = None
+    notes: list[GitLabNoteSchema] = Field(default_factory=list)
+
+
+class GitLabCreateMRDiscussionReplyRequestSchema(BaseModel):
+    body: str
+
+
+class GitLabCreateMRDiscussionReplyResponseSchema(BaseModel):
+    id: int
+    body: str
ai_review/clients/gitlab/mr/schema/notes.py
@@ -1,13 +1,17 @@
 from pydantic import BaseModel, RootModel
 
+from ai_review.clients.gitlab.mr.schema.user import GitLabUserSchema
+
 
 class GitLabNoteSchema(BaseModel):
     id: int
     body: str
+    author: GitLabUserSchema | None = None
 
 
 class GitLabGetMRNotesQuerySchema(BaseModel):
-    per_page: int
+    page: int = 1
+    per_page: int = 100
 
 
 class GitLabGetMRNotesResponseSchema(RootModel[list[GitLabNoteSchema]]):
ai_review/clients/gitlab/mr/schema/user.py
@@ -0,0 +1,7 @@
+from pydantic import BaseModel
+
+
+class GitLabUserSchema(BaseModel):
+    id: int
+    name: str
+    username: str
ai_review/clients/gitlab/mr/types.py
@@ -3,8 +3,9 @@ from typing import Protocol
 from ai_review.clients.gitlab.mr.schema.changes import GitLabGetMRChangesResponseSchema
 from ai_review.clients.gitlab.mr.schema.discussions import (
     GitLabGetMRDiscussionsResponseSchema,
-    GitLabCreateMRDiscussionResponseSchema,
     GitLabCreateMRDiscussionRequestSchema,
+    GitLabCreateMRDiscussionResponseSchema,
+    GitLabCreateMRDiscussionReplyResponseSchema
 )
 from ai_review.clients.gitlab.mr.schema.notes import GitLabGetMRNotesResponseSchema, GitLabCreateMRNoteResponseSchema
 
@@ -14,12 +15,6 @@ class GitLabMergeRequestsHTTPClientProtocol(Protocol):
 
     async def get_notes(self, project_id: str, merge_request_id: str) -> GitLabGetMRNotesResponseSchema: ...
 
-    async def get_discussions(
-        self,
-        project_id: str,
-        merge_request_id: str
-    ) -> GitLabGetMRDiscussionsResponseSchema: ...
-
     async def create_note(
         self,
         body: str,
@@ -27,9 +22,23 @@ class GitLabMergeRequestsHTTPClientProtocol(Protocol):
         merge_request_id: str,
     ) -> GitLabCreateMRNoteResponseSchema: ...
 
+    async def get_discussions(
+        self,
+        project_id: str,
+        merge_request_id: str
+    ) -> GitLabGetMRDiscussionsResponseSchema: ...
+
     async def create_discussion(
         self,
         project_id: str,
         merge_request_id: str,
         request: GitLabCreateMRDiscussionRequestSchema,
     ) -> GitLabCreateMRDiscussionResponseSchema: ...
+
+    async def create_discussion_reply(
+        self,
+        project_id: str,
+        merge_request_id: str,
+        discussion_id: str,
+        body: str,
+    ) -> GitLabCreateMRDiscussionReplyResponseSchema: ...
ai_review/clients/gitlab/tools.py
@@ -0,0 +1,5 @@
+from httpx import Response
+
+
+def gitlab_has_next_page(response: Response) -> bool:
+    return bool(response.headers.get("X-Next-Page"))
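
`gitlab_has_next_page` relies on GitLab's `X-Next-Page` response header, which is empty on the final page. A small sketch of the expected behavior, using hand-built `httpx.Response` objects:

```python
from httpx import Response

from ai_review.clients.gitlab.tools import gitlab_has_next_page

# GitLab sets X-Next-Page to the next page number while more pages remain,
# and to an empty string (or omits it) on the last page.
assert gitlab_has_next_page(Response(200, headers={"X-Next-Page": "2"})) is True
assert gitlab_has_next_page(Response(200, headers={"X-Next-Page": ""})) is False
assert gitlab_has_next_page(Response(200)) is False
```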
ai_review/libs/config/prompt.py
@@ -6,104 +6,123 @@ from pydantic import BaseModel, FilePath, Field
 from ai_review.libs.resources import load_resource
 
 
+def resolve_prompt_files(files: list[FilePath] | None, default_file: str) -> list[Path]:
+    return files or [
+        load_resource(
+            package="ai_review.prompts",
+            filename=default_file,
+            fallback=f"ai_review/prompts/{default_file}"
+        )
+    ]
+
+
+def resolve_system_prompt_files(files: list[FilePath] | None, include: bool, default_file: str) -> list[Path]:
+    global_files = [
+        load_resource(
+            package="ai_review.prompts",
+            filename=default_file,
+            fallback=f"ai_review/prompts/{default_file}"
+        )
+    ]
+
+    if files is None:
+        return global_files
+
+    if include:
+        return global_files + files
+
+    return files
+
+
 class PromptConfig(BaseModel):
     context: dict[str, str] = Field(default_factory=dict)
     normalize_prompts: bool = True
     context_placeholder: str = "<<{value}>>"
+
+    # --- Prompts ---
     inline_prompt_files: list[FilePath] | None = None
     context_prompt_files: list[FilePath] | None = None
     summary_prompt_files: list[FilePath] | None = None
+    inline_reply_prompt_files: list[FilePath] | None = None
+    summary_reply_prompt_files: list[FilePath] | None = None
+
+    # --- System Prompts ---
     system_inline_prompt_files: list[FilePath] | None = None
     system_context_prompt_files: list[FilePath] | None = None
     system_summary_prompt_files: list[FilePath] | None = None
+    system_inline_reply_prompt_files: list[FilePath] | None = None
+    system_summary_reply_prompt_files: list[FilePath] | None = None
+
+    # --- Include System Prompts ---
     include_inline_system_prompts: bool = True
     include_context_system_prompts: bool = True
     include_summary_system_prompts: bool = True
+    include_inline_reply_system_prompts: bool = True
+    include_summary_reply_system_prompts: bool = True
 
+    # --- Prompts ---
     @cached_property
     def inline_prompt_files_or_default(self) -> list[Path]:
-        return self.inline_prompt_files or [
-            load_resource(
-                package="ai_review.prompts",
-                filename="default_inline.md",
-                fallback="ai_review/prompts/default_inline.md"
-            )
-        ]
+        return resolve_prompt_files(self.inline_prompt_files, "default_inline.md")
 
     @cached_property
     def context_prompt_files_or_default(self) -> list[Path]:
-        return self.context_prompt_files or [
-            load_resource(
-                package="ai_review.prompts",
-                filename="default_context.md",
-                fallback="ai_review/prompts/default_context.md"
-            )
-        ]
+        return resolve_prompt_files(self.context_prompt_files, "default_context.md")
 
     @cached_property
     def summary_prompt_files_or_default(self) -> list[Path]:
-        return self.summary_prompt_files or [
-            load_resource(
-                package="ai_review.prompts",
-                filename="default_summary.md",
-                fallback="ai_review/prompts/default_summary.md"
-            )
-        ]
+        return resolve_prompt_files(self.summary_prompt_files, "default_summary.md")
 
     @cached_property
-    def system_inline_prompt_files_or_default(self) -> list[Path]:
-        global_files = [
-            load_resource(
-                package="ai_review.prompts",
-                filename="default_system_inline.md",
-                fallback="ai_review/prompts/default_system_inline.md"
-            )
-        ]
-
-        if self.system_inline_prompt_files is None:
-            return global_files
+    def inline_reply_prompt_files_or_default(self) -> list[Path]:
+        return resolve_prompt_files(self.inline_reply_prompt_files, "default_inline_reply.md")
 
-        if self.include_inline_system_prompts:
-            return global_files + self.system_inline_prompt_files
+    @cached_property
+    def summary_reply_prompt_files_or_default(self) -> list[Path]:
+        return resolve_prompt_files(self.summary_reply_prompt_files, "default_summary_reply.md")
 
-        return self.system_inline_prompt_files
+    # --- System Prompts ---
+    @cached_property
+    def system_inline_prompt_files_or_default(self) -> list[Path]:
+        return resolve_system_prompt_files(
+            files=self.system_inline_prompt_files,
+            include=self.include_inline_system_prompts,
+            default_file="default_system_inline.md"
+        )
 
     @cached_property
     def system_context_prompt_files_or_default(self) -> list[Path]:
-        global_files = [
-            load_resource(
-                package="ai_review.prompts",
-                filename="default_system_context.md",
-                fallback="ai_review/prompts/default_system_context.md"
-            )
-        ]
-
-        if self.system_context_prompt_files is None:
-            return global_files
-
-        if self.include_context_system_prompts:
-            return global_files + self.system_context_prompt_files
-
-        return self.system_context_prompt_files
+        return resolve_system_prompt_files(
+            files=self.system_context_prompt_files,
+            include=self.include_context_system_prompts,
+            default_file="default_system_context.md"
+        )
 
     @cached_property
     def system_summary_prompt_files_or_default(self) -> list[Path]:
-        global_files = [
-            load_resource(
-                package="ai_review.prompts",
-                filename="default_system_summary.md",
-                fallback="ai_review/prompts/default_system_summary.md"
-            )
-        ]
-
-        if self.system_summary_prompt_files is None:
-            return global_files
-
-        if self.include_summary_system_prompts:
-            return global_files + self.system_summary_prompt_files
+        return resolve_system_prompt_files(
+            files=self.system_summary_prompt_files,
+            include=self.include_summary_system_prompts,
+            default_file="default_system_summary.md"
+        )
 
-        return self.system_summary_prompt_files
+    @cached_property
+    def system_inline_reply_prompt_files_or_default(self) -> list[Path]:
+        return resolve_system_prompt_files(
+            files=self.system_inline_reply_prompt_files,
+            include=self.include_inline_reply_system_prompts,
+            default_file="default_system_inline_reply.md"
+        )
 
+    @cached_property
+    def system_summary_reply_prompt_files_or_default(self) -> list[Path]:
+        return resolve_system_prompt_files(
+            files=self.system_summary_reply_prompt_files,
+            include=self.include_summary_reply_system_prompts,
+            default_file="default_system_summary_reply.md"
+        )
+
+    # --- Load Prompts ---
     def load_inline(self) -> list[str]:
         return [file.read_text(encoding="utf-8") for file in self.inline_prompt_files_or_default]
 
@@ -113,6 +132,13 @@ class PromptConfig(BaseModel):
     def load_summary(self) -> list[str]:
         return [file.read_text(encoding="utf-8") for file in self.summary_prompt_files_or_default]
 
+    def load_inline_reply(self) -> list[str]:
+        return [file.read_text(encoding="utf-8") for file in self.inline_reply_prompt_files_or_default]
+
+    def load_summary_reply(self) -> list[str]:
+        return [file.read_text(encoding="utf-8") for file in self.summary_reply_prompt_files_or_default]
+
+    # --- Load System Prompts ---
    def load_system_inline(self) -> list[str]:
         return [file.read_text(encoding="utf-8") for file in self.system_inline_prompt_files_or_default]
 
@@ -121,3 +147,9 @@ class PromptConfig(BaseModel):
 
     def load_system_summary(self) -> list[str]:
         return [file.read_text(encoding="utf-8") for file in self.system_summary_prompt_files_or_default]
+
+    def load_system_inline_reply(self) -> list[str]:
+        return [file.read_text(encoding="utf-8") for file in self.system_inline_reply_prompt_files_or_default]
+
+    def load_system_summary_reply(self) -> list[str]:
+        return [file.read_text(encoding="utf-8") for file in self.system_summary_reply_prompt_files_or_default]
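
The refactor replaces the per-field boilerplate with two helpers: `resolve_prompt_files` falls back to the bundled default when no files are configured, while `resolve_system_prompt_files` returns the bundled default, prepends it to user files, or returns user files alone depending on the `include_*` flag. A rough sketch of that precedence, assuming the wheel is installed so `load_resource` can find the bundled prompts, with a temporary file standing in for a user-supplied prompt:

```python
import tempfile
from pathlib import Path

from ai_review.libs.config.prompt import PromptConfig

# A stand-in user prompt file (FilePath fields require the path to exist).
custom = Path(tempfile.mkdtemp()) / "my_system_inline.md"
custom.write_text("Extra reviewer instructions.", encoding="utf-8")

# No user files configured: only the bundled default_system_inline.md is used.
defaults_only = PromptConfig()
print(defaults_only.system_inline_prompt_files_or_default)

# User files plus include_inline_system_prompts=True (the default):
# bundled default first, then the custom file.
combined = PromptConfig(system_inline_prompt_files=[custom])
print(combined.system_inline_prompt_files_or_default)

# Include flag disabled: the custom file fully replaces the bundled default.
custom_only = PromptConfig(
    system_inline_prompt_files=[custom],
    include_inline_system_prompts=False,
)
print(custom_only.system_inline_prompt_files_or_default)
```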
ai_review/libs/config/review.py
@@ -20,7 +20,9 @@ class ReviewMode(StrEnum):
 class ReviewConfig(BaseModel):
     mode: ReviewMode = ReviewMode.FULL_FILE_DIFF
     inline_tag: str = Field(default="#ai-review-inline")
+    inline_reply_tag: str = Field(default="#ai-review-inline-reply")
     summary_tag: str = Field(default="#ai-review-summary")
+    summary_reply_tag: str = Field(default="#ai-review-summary-reply")
     context_lines: int = Field(default=10, ge=0)
     allow_changes: list[str] = Field(default_factory=list)
     ignore_changes: list[str] = Field(default_factory=list)
ai_review/libs/config/vcs/base.py
@@ -5,11 +5,13 @@ from pydantic import BaseModel, Field
 from ai_review.libs.config.vcs.bitbucket import BitbucketPipelineConfig, BitbucketHTTPClientConfig
 from ai_review.libs.config.vcs.github import GitHubPipelineConfig, GitHubHTTPClientConfig
 from ai_review.libs.config.vcs.gitlab import GitLabPipelineConfig, GitLabHTTPClientConfig
+from ai_review.libs.config.vcs.pagination import VCSPaginationConfig
 from ai_review.libs.constants.vcs_provider import VCSProvider
 
 
 class VCSConfigBase(BaseModel):
     provider: VCSProvider
+    pagination: VCSPaginationConfig = VCSPaginationConfig()
 
 
 class GitLabVCSConfig(VCSConfigBase):
ai_review/libs/config/vcs/pagination.py
@@ -0,0 +1,6 @@
+from pydantic import BaseModel, Field
+
+
+class VCSPaginationConfig(BaseModel):
+    per_page: int = Field(default=100, ge=1, le=100)
+    max_pages: int = Field(default=5, ge=1)
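
`VCSPaginationConfig` caps the page size at 100 and bounds how many pages a single listing call may fetch. A quick, purely illustrative sketch of the defaults and validation:

```python
from pydantic import ValidationError

from ai_review.libs.config.vcs.pagination import VCSPaginationConfig

config = VCSPaginationConfig()
assert config.per_page == 100 and config.max_pages == 5

# per_page is constrained to 1..100, so oversized values are rejected.
try:
    VCSPaginationConfig(per_page=250)
except ValidationError as error:
    print(error)
```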
ai_review/libs/http/paginate.py
@@ -0,0 +1,43 @@
+from typing import Awaitable, Callable, TypeVar
+
+from httpx import Response
+from pydantic import BaseModel
+
+from ai_review.libs.logger import get_logger
+
+T = TypeVar("T", bound=BaseModel)
+
+logger = get_logger("PAGINATE")
+
+
+async def paginate(
+    fetch_page: Callable[[int], Awaitable[Response]],
+    extract_items: Callable[[Response], list[T]],
+    has_next_page: Callable[[Response], bool],
+    max_pages: int | None = None,
+) -> list[T]:
+    page = 1
+    items: list[T] = []
+
+    while True:
+        response = await fetch_page(page)
+
+        try:
+            extracted = extract_items(response)
+        except Exception as error:
+            logger.error(f"Failed to extract items on {page=}")
+            raise RuntimeError(f"Failed to extract items on {page=}") from error
+
+        logger.debug(f"Page {page}: extracted {len(extracted)} items (total={len(items) + len(extracted)})")
+        items.extend(extracted)
+
+        if not has_next_page(response):
+            logger.debug(f"Pagination finished after {page} page(s), total items={len(items)}")
+            break
+
+        page += 1
+        if max_pages and (page > max_pages):
+            logger.error(f"Pagination exceeded {max_pages=}")
+            raise RuntimeError(f"Pagination exceeded {max_pages=}")
+
+    return items
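
`paginate` is provider-agnostic: the caller supplies how to fetch a page, how to turn a response into items, and how to decide whether another page exists; exceeding `max_pages` raises rather than silently truncating. A minimal sketch with fabricated `httpx.Response` pages and a throwaway Pydantic model (both are assumptions for illustration, not part of the package):

```python
import asyncio

from httpx import Response
from pydantic import BaseModel

from ai_review.libs.http.paginate import paginate


class Item(BaseModel):
    id: int


async def main() -> None:
    # Two fake pages; an empty X-Next-Page header marks the last one.
    pages = {
        1: Response(200, json=[{"id": 1}, {"id": 2}], headers={"X-Next-Page": "2"}),
        2: Response(200, json=[{"id": 3}], headers={"X-Next-Page": ""}),
    }

    async def fetch_page(page: int) -> Response:
        return pages[page]

    def extract_items(response: Response) -> list[Item]:
        return [Item.model_validate(raw) for raw in response.json()]

    items = await paginate(
        fetch_page=fetch_page,
        extract_items=extract_items,
        has_next_page=lambda response: bool(response.headers.get("X-Next-Page")),
        max_pages=5,
    )
    assert [item.id for item in items] == [1, 2, 3]


asyncio.run(main())
```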
ai_review/libs/llm/output_json_parser.py
@@ -0,0 +1,60 @@
+import re
+from typing import TypeVar, Generic, Type
+
+from pydantic import BaseModel, ValidationError
+
+from ai_review.libs.json import sanitize_json_string
+from ai_review.libs.logger import get_logger
+
+logger = get_logger("LLM_JSON_PARSER")
+
+T = TypeVar("T", bound=BaseModel)
+
+CLEAN_JSON_BLOCK_RE = re.compile(r"```(?:json)?(.*?)```", re.DOTALL | re.IGNORECASE)
+
+
+class LLMOutputJSONParser(Generic[T]):
+    """Reusable JSON parser for LLM responses."""
+
+    def __init__(self, model: Type[T]):
+        self.model = model
+        self.model_name = self.model.__name__
+
+    def try_parse(self, raw: str) -> T | None:
+        logger.debug(f"[{self.model_name}] Attempting JSON parse (len={len(raw)})")
+
+        try:
+            return self.model.model_validate_json(raw)
+        except ValidationError as error:
+            logger.warning(f"[{self.model_name}] Raw JSON parse failed: {error}")
+            cleaned = sanitize_json_string(raw)
+
+            if cleaned != raw:
+                logger.debug(f"[{self.model_name}] Sanitized JSON differs, retrying parse...")
+                try:
+                    return self.model.model_validate_json(cleaned)
+                except ValidationError as error:
+                    logger.warning(f"[{self.model_name}] Sanitized JSON still invalid: {error}")
+                    return None
+            else:
+                logger.debug(f"[{self.model_name}] Sanitized JSON identical — skipping retry")
+                return None
+
+    def parse_output(self, output: str) -> T | None:
+        output = (output or "").strip()
+        if not output:
+            logger.warning(f"[{self.model_name}] Empty LLM output")
+            return None
+
+        logger.debug(f"[{self.model_name}] Parsing output (len={len(output)})")
+
+        if match := CLEAN_JSON_BLOCK_RE.search(output):
+            logger.debug(f"[{self.model_name}] Found fenced JSON block, extracting...")
+            output = match.group(1).strip()
+
+        if parsed := self.try_parse(output):
+            logger.info(f"[{self.model_name}] Successfully parsed")
+            return parsed
+
+        logger.error(f"[{self.model_name}] No valid JSON found in output")
+        return None
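
`LLMOutputJSONParser` strips an optional fenced JSON block from the model output, validates it against a Pydantic model, and retries once on a sanitized copy before giving up and returning `None`. A hedged usage sketch; `ReplySchema` is a made-up stand-in, not the package's actual reply schema:

```python
from pydantic import BaseModel

from ai_review.libs.llm.output_json_parser import LLMOutputJSONParser


class ReplySchema(BaseModel):
    # Hypothetical stand-in for a reply model such as the inline-reply schema in this release.
    message: str
    suggestion: str | None = None


parser = LLMOutputJSONParser(model=ReplySchema)

# Simulate an LLM answer wrapped in a fenced json block (fence built dynamically
# so this snippet itself contains no literal triple backticks).
fence = "`" * 3
raw_output = "\n".join([
    fence + "json",
    '{"message": "Consider handling the empty-list case.", "suggestion": null}',
    fence,
])

parsed = parser.parse_output(raw_output)
assert parsed is not None
assert parsed.message.startswith("Consider")
assert parsed.suggestion is None
```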
ai_review/prompts/default_inline_reply.md
@@ -0,0 +1,10 @@
+You are an AI assistant replying to a specific inline code review comment.
+
+Use the conversation (`## Conversation`) and code diff (`## Diff`) to continue the discussion constructively.
+
+Guidelines:
+
+- Focus only on the latest comment and relevant code context.
+- Keep your tone concise, professional, and technical (1–2 sentences).
+- If a code change is needed, include it in "suggestion" — provide only the replacement code.
+- If no further action or clarification is required, output exactly: No reply.
ai_review/prompts/default_summary_reply.md
@@ -0,0 +1,14 @@
+You are an AI assistant participating in a summary code review discussion.
+
+Use the previous conversation (`## Conversation`) and code changes (`## Changes`) to continue the discussion
+constructively.
+
+Guidelines:
+
+- Act as a **technical reviewer**, not the code author.
+- Keep your tone concise, professional, and focused (1–3 sentences).
+- Address the latest user comment directly, providing clarification, reasoning, or an actionable suggestion.
+- If the comment contains a request or implies an action (e.g. adding tests, refactoring, or improving validation),
+  provide a clear recommendation or short illustrative code snippet.
+- Avoid greetings, acknowledgements, or repeating earlier feedback.
+- If no reply is needed, write exactly: `No reply`.
ai_review/prompts/default_system_inline_reply.md
@@ -0,0 +1,31 @@
+You are an AI assistant participating in a code review discussion.
+
+Return ONLY a valid JSON object representing a single inline reply to the current comment thread.
+
+Format:
+
+```json
+{
+  "message": "<short reply message to the comment thread>",
+  "suggestion": "<replacement code block without markdown, or null if not applicable>"
+}
+```
+
+Guidelines:
+
+- Output must be exactly one JSON object, not an array or text block.
+- "message" — required, non-empty, short (1–2 sentences), professional, and focused on the specific comment.
+- "suggestion" — optional:
+  - If suggesting a fix or refactor, provide only the replacement code (no markdown, no explanations).
+  - Maintain indentation and style consistent with the surrounding diff.
+  - If no code change is appropriate, use null.
+- Do not quote previous comments or restate context.
+- Never include any extra text outside the JSON object.
+- If no meaningful reply is needed, return:
+
+```json
+{
+  "message": "No reply.",
+  "suggestion": null
+}
+```