uis-sprint-report 0.1.0 (tar.gz)
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release: this version of uis-sprint-report might be problematic.
- uis-sprint-report-0.1.0/LICENSE +21 -0
- uis-sprint-report-0.1.0/PKG-INFO +80 -0
- uis-sprint-report-0.1.0/README.md +64 -0
- uis-sprint-report-0.1.0/demo/__init__.py +1 -0
- uis-sprint-report-0.1.0/demo/config.py +12 -0
- uis-sprint-report-0.1.0/demo/demo.py +118 -0
- uis-sprint-report-0.1.0/demo/embeddings.py +43 -0
- uis-sprint-report-0.1.0/demo/main.py +73 -0
- uis-sprint-report-0.1.0/demo/models.py +16 -0
- uis-sprint-report-0.1.0/demo/prompts.py +9 -0
- uis-sprint-report-0.1.0/demo/utils.py +33 -0
- uis-sprint-report-0.1.0/setup.cfg +4 -0
- uis-sprint-report-0.1.0/setup.py +38 -0
- uis-sprint-report-0.1.0/tests/__init__.py +0 -0
- uis-sprint-report-0.1.0/tests/test_demo.py +49 -0
- uis-sprint-report-0.1.0/tests/test_embeddings.py +32 -0
- uis-sprint-report-0.1.0/tests/test_utils.py +33 -0
- uis-sprint-report-0.1.0/uis_sprint_report.egg-info/PKG-INFO +80 -0
- uis-sprint-report-0.1.0/uis_sprint_report.egg-info/SOURCES.txt +20 -0
- uis-sprint-report-0.1.0/uis_sprint_report.egg-info/dependency_links.txt +1 -0
- uis-sprint-report-0.1.0/uis_sprint_report.egg-info/requires.txt +14 -0
- uis-sprint-report-0.1.0/uis_sprint_report.egg-info/top_level.txt +2 -0

uis-sprint-report-0.1.0/LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 E. Evstafiev

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

uis-sprint-report-0.1.0/PKG-INFO
@@ -0,0 +1,80 @@
Metadata-Version: 2.1
Name: uis-sprint-report
Version: 0.1.0
Summary: A Python package for generating sprint reports and managing sprint activities at University Information Services.
Home-page: https://gitlab.developers.cam.ac.uk/ee345/demo
Author: Eugene Evstafev
Author-email: ee345@cam.ac.uk
Classifier: License :: OSI Approved :: MIT License
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Operating System :: OS Independent
Requires-Python: >=3.7
Description-Content-Type: text/markdown
License-File: LICENSE

# Project Overview

This project leverages the capabilities of the LangChain library to interact with GitLab issues and generate sprint report presentations. The system retrieves issue data from a specified GitLab group, processes the information, and summarizes the status of sprint goals in a PowerPoint presentation.

## Features

- Retrieve issues from GitLab using a specific label and iteration ID.
- Analyze and categorize issues based on sprint goals.
- Generate summaries and status reports for sprint goals.
- Create PowerPoint presentations to visually present the sprint report.

## Prerequisites

Before you can run this project, you need to have Python installed on your machine. Python 3.6 or higher is recommended. You also need to ensure that Git is installed if you need to clone the repository.

## Installation

1. **Clone the Repository**:
```bash
git clone https://gitlab.developers.cam.ac.uk/ee345/demo.git
cd demo
```

2. **Install Required Libraries**:
Ensure you have `pip` installed and then run:
```bash
pip install -r requirements.txt
```

3. **Download and Setup Ollama Model**:
Download the Ollama model from [Ollama Download](https://ollama.com/download) and [Mistral Model](https://ollama.com/library/mistral). Follow the instructions on the Ollama website for setting up the model.

## Usage

To run the script, use the following command. Replace the placeholder values with actual data like the GitLab access token and other parameters as required:

```bash
python main.py --token YOUR_GITLAB_ACCESS_TOKEN --gitlab_url https://gitlab.developers.cam.ac.uk/api/v4 --iteration_id=368 --goals "Front End Error Reporting,Gain access to account data (held in Entra ID and in our own systems) to start to understand user breakdown/profiles between Raven/Azure"
```

### Parameters Description

- `--token`: GitLab access token for authentication. This is necessary to access the GitLab API securely.
- `--gitlab_url`: The base URL of your GitLab instance API. Default is `https://gitlab.developers.cam.ac.uk/api/v4`.
- `--group_id`: The ID of the GitLab group from which issues are to be fetched. Default is `5`.
- `--labels`: URL-encoded string of labels used to filter issues by specific criteria. Default is `team%3A%3AIdentity`.
- `--iteration_id`: The ID of the specific iteration to filter issues relevant to a particular sprint. Leave empty if not using iteration-based filtering.
- `--goals`: A comma-separated list of sprint goals to analyze. Each goal should be clearly defined.
- `--presentation_name`: The name of the output PowerPoint file where the sprint report will be saved. Default is `demo.pptx`.
- `--chunk_size`: The size of text chunks in characters when splitting documents for processing. Default is `500`.
- `--chunk_overlap`: The overlap of text chunks in characters when splitting documents. Default is `0`.
- `--search_type`: The type of search to perform when retrieving documents. Default is `mmr` which stands for Maximal Marginal Relevance.
- `--search_kwargs`: Additional keyword arguments in JSON format to configure the search behavior. Default is `{"k": 8}`, where `k` is the number of documents to retrieve.
- `--cache_folder`: The directory to use for caching data such as embeddings. Default is `cache`.
- `--model`: The language model to use, specified by name. Default is `mistral`.
- `--max_tokens`: The maximum number of tokens to generate from the language model in a single request. Default is `1500`.

## Contributing

Contributions are welcome! For major changes, please open an issue first to discuss what you would like to change.

## License

This project is licensed under the MIT License - see the [LICENSE.md](LICENSE) file for details.

uis-sprint-report-0.1.0/README.md
@@ -0,0 +1,64 @@
(Identical to the Markdown long description embedded in PKG-INFO above, from "# Project Overview" through the "## License" section.)

uis-sprint-report-0.1.0/demo/__init__.py
@@ -0,0 +1 @@
from .main import demo

uis-sprint-report-0.1.0/demo/config.py
@@ -0,0 +1,12 @@
CACHE_FILE = ".cache"
CHUNK_SIZE = 500
CHUNK_OVERLAP = 0
DEFAULT_API_BASE = "https://gitlab.developers.cam.ac.uk/"
DEFAULT_MODEL = "mistral:latest"
DEFAULT_COMMAND = "report"
DEFAULT_LABELS = "team::Identity"
DEFAULT_GROUP_ID = 5
DEFAULT_ITERATION_ID = 383
DEFAULT_PPTX_FILE_NAME = "../sprint_goals.pptx"
MAX_TOKENS = 1500
MAX_ATTEMPTS = 5

uis-sprint-report-0.1.0/demo/demo.py
@@ -0,0 +1,118 @@
from rich import print
from rich.table import Table
from pptx import Presentation

from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_community.chat_models import ChatOllama
from .prompts import prompts
from .models import SprintReport, ResponseModel


def prepare_parser_and_prompt(model):
    """Prepare parser and prompt template for structured outputs."""
    parser = PydanticOutputParser(pydantic_object=model)
    prompt_template = ChatPromptTemplate.from_messages([
        ("system",
         "Answer the user query. Wrap the output in `json` tags\n{format_instructions}. The context is {context}"),
        ("human", "{query}"),
    ]).partial(format_instructions=parser.get_format_instructions())

    return parser, prompt_template


def prepare_chain(llm, retriever, model):
    parser, prompt_template = prepare_parser_and_prompt(model)
    return {"context": retriever, "query": RunnablePassthrough()} | prompt_template | llm | parser


def invoke_chain(chain, prompt, max_attempts):
    count_attempts = 0
    while count_attempts < max_attempts:
        try:
            response = chain.invoke(prompt)
            return response
        except Exception as e:
            count_attempts += 1
            print("[red]❌ Unable to generate answer[/red]")


def get_report(llm, embeddings, max_attempts, sprint_goals=None):
    chain = prepare_chain(llm, embeddings, SprintReport)
    prompt = prompts['generate_report']['question']
    if sprint_goals is not None:
        prompt += prompts['generate_report']['additional_info'] + sprint_goals
    report = invoke_chain(chain, prompt, max_attempts)
    return report


def report(llm, embeddings, max_attempts):
    print("[bold]Generating report...[/bold]")
    report = get_report(llm, embeddings, max_attempts)

    table = Table(title="Sprint Activities Report")

    table.add_column("Title", style="cyan", no_wrap=True)
    table.add_column("Description", style="magenta")
    table.add_column("Status", style="green")

    for activity in report.activities:
        table.add_row(
            activity.title,
            activity.brief_desc_status,
            activity.status
        )

    print("[green]✅ Report generated[/green]")
    print(table)


def pptx(llm, embeddings, max_attempts, sprint_goals, pptx_file_name):
    report = get_report(llm, embeddings, max_attempts, sprint_goals)

    prs = Presentation()
    slide_layout = prs.slide_layouts[1]

    slide = prs.slides.add_slide(slide_layout)
    title = slide.shapes.title
    content = slide.placeholders[1]

    title.text = "Sprint Activities Report"
    content.text = "Activities Summary:\n"

    for activity in report.activities:
        content.text += f"- {activity.title}: {activity.brief_desc_status} (Status: {activity.status})\n"

    prs.save(pptx_file_name)
    print("[green]✅ PowerPoint report generated[/green]")


def chat(llm, embeddings, max_attempts):
    is_end = False
    while not is_end:
        user_input = input("You: ")
        if user_input == "exit" or user_input == "quit" or user_input == "q":
            is_end = True
            print("[bold]👋 Goodbye![/bold]")
            break
        chain = prepare_chain(llm, embeddings, ResponseModel)
        response = invoke_chain(
            chain,
            user_input + "\n" + prompts['chat']['question'],
            max_attempts
        )
        print(f"Bot: {response}")


def execute_command(command, embeddings, model, max_tokens, max_attempts, sprint_goals, pptx_file_name):
    llm = ChatOllama(model=model, max_tokens=max_tokens)
    if command == "report":
        report(llm, embeddings, max_attempts)
    elif command == "pptx":
        pptx(llm, embeddings, max_attempts, sprint_goals, pptx_file_name)
    elif command == "chat":
        chat(llm, embeddings, max_attempts)
    else:
        print("Invalid command")
        raise Exception("Invalid command")
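
The chain assembled in `prepare_chain` is ordinary LCEL, so it can be exercised outside the packaged CLI. Below is a minimal sketch, not shipped in the package, assuming a local Ollama server with the `mistral` model pulled; a tiny in-memory FAISS index stands in for the GitLab-backed retriever that `embeddings.py` normally supplies.

```python
# Illustrative sketch only (not part of the package): drive the demo.demo helpers
# against a toy retriever instead of real GitLab issues.
from langchain_community.chat_models import ChatOllama
from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings

from demo.demo import prepare_chain, invoke_chain
from demo.models import SprintReport
from demo.prompts import prompts

# Stand-in for the issue board that embeddings.py would normally fetch and index.
board = [
    "Issue: Front End Error Reporting - in progress, UI wiring remains",
    "Issue: Entra ID account data access - blocked on permissions",
]
retriever = FAISS.from_texts(board, HuggingFaceEmbeddings()).as_retriever()

llm = ChatOllama(model="mistral")  # assumes `ollama pull mistral` has been run locally
chain = prepare_chain(llm, retriever, SprintReport)

report = invoke_chain(chain, prompts["generate_report"]["question"], max_attempts=5)
if report is not None:  # invoke_chain returns None when every attempt fails
    for activity in report.activities:
        print(f"{activity.title}: {activity.status}")
```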

uis-sprint-report-0.1.0/demo/embeddings.py
@@ -0,0 +1,43 @@
from rich import print
from .utils import manage_cache_file
from get_gitlab_issues import get_gitlab_issues, get_gitlab_issue

from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import CharacterTextSplitter
from langchain_community.vectorstores import FAISS


def generate_embeddings(
    api_base,
    access_token,
    labels,
    group_id,
    iteration_id,
    cache_file,
    chunk_size,
    chunk_overlap,
    embedding_model,
    sprint_goals
):
    print("[green]Recieveing issues from GitLab...[/green]")
    issues = get_gitlab_issues(access_token, api_base, group_id, labels, iteration_id)
    board = []
    for issue in issues:
        issue = issue.attributes
        board.append(issue)

    print("[green]✅ Issues received[/green]")
    manage_cache_file(create=True, content=sprint_goals+"\n "+str(board))

    print("[green]Generate embeddings...[/green]")
    loader = TextLoader(cache_file)
    documents = loader.load()
    splitter = CharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
    texts = splitter.split_documents(documents)

    embedding = embedding_model
    vector_store = FAISS.from_documents(texts, embedding)
    retriever = vector_store.as_retriever()
    manage_cache_file(content="")
    print("[green]✅ Embeddings generated[/green]")
    return retriever
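
The value returned above is a standard LangChain retriever over the cached issue text. A minimal sketch, not included in the package, that builds the same kind of FAISS retriever from two hard-coded strings instead of fetched issues and queries it directly:

```python
# Sketch only (not shipped): the shape of what generate_embeddings() returns.
from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings

texts = [
    "goal: Front End Error Reporting",
    "issue: error banner component is in review",
]
retriever = FAISS.from_texts(texts, HuggingFaceEmbeddings()).as_retriever()

# Retrievers are Runnables in langchain 0.2.x, so invoke() returns Documents.
for doc in retriever.invoke("status of the error reporting work"):
    print(doc.page_content)
```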

uis-sprint-report-0.1.0/demo/main.py
@@ -0,0 +1,73 @@
import click
from .utils import check_gitlab_credentials, check_ollama
from .embeddings import generate_embeddings
from langchain_huggingface import HuggingFaceEmbeddings

from .demo import execute_command
from .config import (
    DEFAULT_API_BASE,
    DEFAULT_COMMAND,
    DEFAULT_GROUP_ID,
    DEFAULT_ITERATION_ID,
    DEFAULT_LABELS,
    DEFAULT_MODEL,
    DEFAULT_PPTX_FILE_NAME,
    CACHE_FILE,
    CHUNK_SIZE,
    CHUNK_OVERLAP,
    MAX_TOKENS,
    MAX_ATTEMPTS
)


@click.command()
@click.option("--api-base", default=DEFAULT_API_BASE, metavar="BASE_URL")
@click.option("--access-token", default=None, metavar="ACCESS_TOKEN")
@click.option("--command", default=DEFAULT_COMMAND, metavar="COMMAND")
@click.option("--group-id", default=DEFAULT_GROUP_ID, metavar="GROUP_ID")
@click.option("--iteration-id", default=DEFAULT_ITERATION_ID, metavar="ITERATION_ID")
@click.option("--labels", default=DEFAULT_LABELS, metavar="LABELS")
@click.option("--model", default=DEFAULT_MODEL, metavar="MODEL")
@click.option("--cache-file", default=CACHE_FILE, metavar="CACHE_FILE")
@click.option("--chunk-size", default=CHUNK_SIZE, metavar="CHUNK_SIZE")
@click.option("--chunk-overlap", default=CHUNK_OVERLAP, metavar="CHUNK_OVERLAP")
@click.option("--max-tokens", default=MAX_TOKENS, metavar="MAX_TOKENS")
@click.option("--sprint-goals", default="", metavar="SPRINT_GOALS")
@click.option("--pptx-file", default=DEFAULT_PPTX_FILE_NAME, metavar="PPTX_FILE")
@click.option("--max-attempts", default=MAX_ATTEMPTS, metavar="MAX_ATTEMPTS")
def demo(
    api_base: str,
    access_token: str,
    model: str,
    command: str,
    labels: str,
    iteration_id: int,
    group_id: int,
    cache_file: str,
    chunk_size: int,
    chunk_overlap: int,
    max_tokens: int,
    sprint_goals: str,
    pptx_file: str,
    max_attempts: int
):
    check_gitlab_credentials(access_token, api_base)
    check_ollama(model)
    embedding_model = HuggingFaceEmbeddings()
    embeddings = generate_embeddings(
        api_base,
        access_token,
        labels,
        group_id,
        iteration_id,
        cache_file,
        chunk_size,
        chunk_overlap,
        embedding_model,
        sprint_goals
    )
    execute_command(command, embeddings, model, max_tokens, max_attempts, sprint_goals, pptx_file)


if __name__ == "__main__":
    demo()
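
Since `setup.py` (below) declares no `console_scripts` entry point, the command is reached by importing the package. A hedged sketch, not part of the package, of invoking the click command above with an explicit argument list; the token is a placeholder and every other option falls back to the `config.py` defaults:

```python
# Sketch only (not shipped): call the click command programmatically.
# Note: click runs in standalone mode here and will exit the process when done.
from demo import demo  # demo/__init__.py re-exports the click command from demo.main

demo([
    "--access-token", "YOUR_GITLAB_ACCESS_TOKEN",  # placeholder value
    "--command", "report",
    "--sprint-goals", "Front End Error Reporting",
])
```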

uis-sprint-report-0.1.0/demo/models.py
@@ -0,0 +1,16 @@
from langchain_core.pydantic_v1 import BaseModel, Field
from typing import List


class Activity(BaseModel):
    title: str = Field(..., description="The title of the activity, summarizing the main goal or purpose of the activity.")
    brief_desc_status: str = Field(..., description="A short and concise description of the status activity, summarizing its main status.")
    status: str = Field(..., description="The current status of the activity, indicating its progress such as 'Planned', 'InProgress', or 'Completed'.")


class SprintReport(BaseModel):
    activities: List[Activity] = Field(..., description="A list of main activities, each described with a title, brief description status, and current status, summarizing the sprint's progress.")


class ResponseModel(BaseModel):
    response: str = Field(..., description="The response to the user query, providing the requested information.")
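
These models are what `prepare_parser_and_prompt` in `demo.py` hands to `PydanticOutputParser`. A small sketch, not part of the package, showing the format instructions derived from `SprintReport` and how a JSON answer of the shape requested in `prompts.py` parses back into typed objects:

```python
# Sketch only (not shipped): the parser side of the structured-output flow.
from langchain_core.output_parsers import PydanticOutputParser

from demo.models import SprintReport

parser = PydanticOutputParser(pydantic_object=SprintReport)

# These instructions are what prepare_parser_and_prompt() injects into the system prompt.
print(parser.get_format_instructions())

# Parse a JSON answer shaped like the example in prompts.py.
raw = (
    '{"activities": [{"title": "Fix login bug", '
    '"brief_desc_status": "Fix merged, awaiting release", '
    '"status": "Completed"}]}'
)
report = parser.parse(raw)
print(report.activities[0].status)  # -> Completed
```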

uis-sprint-report-0.1.0/demo/prompts.py
@@ -0,0 +1,9 @@
prompts = {
    "generate_report": {
        "question": "Generate a JSON-formatted concise report summarizing the main activities from the sprint board. For each activity, include a title, a brief status description and its current status. The JSON should be structured as a list of activities, each with `title`, 'brief_desc_status' and 'status' fields. Status examples include 'Planned', 'InProgress', 'Completed', 'Blocked', and 'Closed'. This structured report should provide a clear overview of the sprint's progress and highlight areas needing attention. Example of expected JSON output: {'activities':[{'title':'Update project documentation','brief_desc_status':'The most of the issues are resolved','status':'InProgress'},{'title':'Fix the critical bug','brief_desc_status':'The bug is fixed, but the tests are failing','status':'Blocked'}]}",
        "additional_info": "The sprint goals were:",
    },
    "chat": {
        "question": "The answer should be a JSON-formatted response to the user query. The example is {'response':'The answer to the user query'}",
    }
}

uis-sprint-report-0.1.0/demo/utils.py
@@ -0,0 +1,33 @@
import os
import ollama
from rich import print
from get_gitlab_issues import check_access
from .config import CACHE_FILE


def check_gitlab_credentials(token, url):
    if not check_access(token, url):
        print(f"[red]Invalid GitLab credentials[/red]")
        raise Exception("Invalid GitLab credentials")
    print("[green]GitLab credentials are valid[/green]")


def check_ollama(model=None):
    try:
        models = ollama.list()['models']
        if models and not any(m['name'] == model for m in models):
            print(f"[red]Invalid Ollama model[/red]")
            raise Exception("Invalid Ollama model")
    except Exception as e:
        print(f"[red]Ollama is not running. Error: {e}[/red]")
        raise Exception("Ollama is not running")
    print("[green]Ollama model is valid[/green]")


def manage_cache_file(create=False, content=None):
    if create:
        with open(CACHE_FILE, 'w') as f:
            f.write(content)
    else:
        if os.path.exists(CACHE_FILE):
            os.remove(CACHE_FILE)
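
Both guards raise on failure rather than return a status, which is why `main.py` calls them before any model work. A minimal sketch, not included in the package, of running the same pre-flight checks with a placeholder token and the defaults from `config.py`:

```python
# Sketch only (not shipped): the pre-flight checks demo/main.py performs.
from demo.utils import check_gitlab_credentials, check_ollama

try:
    check_gitlab_credentials("YOUR_GITLAB_ACCESS_TOKEN", "https://gitlab.developers.cam.ac.uk/")
    check_ollama("mistral:latest")  # default model name from config.py
except Exception as err:
    print(f"Pre-flight check failed: {err}")
```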

uis-sprint-report-0.1.0/setup.py
@@ -0,0 +1,38 @@
from setuptools import setup, find_packages


setup(
    name='uis-sprint-report',
    version='0.1.0',
    author='Eugene Evstafev',
    author_email='ee345@cam.ac.uk',
    description='A Python package for generating sprint reports and managing sprint activities at University Information Services.',
    long_description=open('README.md').read(),
    long_description_content_type='text/markdown',
    url='https://gitlab.developers.cam.ac.uk/ee345/demo',
    packages=find_packages(),
    install_requires=[
        'click>=8.1.7',
        'rich>=13.7.1',
        'python-pptx>=0.6.23',
        'huggingface-hub==0.23.4',
        'langchain==0.2.6',
        'langchain-community==0.2.6',
        'langchain-core==0.2.10',
        'langchain-huggingface==0.0.3',
        'langchain-text-splitters==0.2.2',
        'faiss-cpu>=1.8.0',
        'pydantic>=2.7.4',
        'scikit-learn>=0.24.1',
        'torch>=1.9.0',
        'transformers>=4.42.3'
    ],
    classifiers=[
        'License :: OSI Approved :: MIT License',
        'Development Status :: 4 - Beta',
        'Intended Audience :: Developers',
        'Programming Language :: Python :: 3',
        'Operating System :: OS Independent',
    ],
    python_requires='>=3.7',
)

uis-sprint-report-0.1.0/tests/__init__.py
File without changes

uis-sprint-report-0.1.0/tests/test_demo.py
@@ -0,0 +1,49 @@
import pytest
from unittest.mock import patch, MagicMock
from click.testing import CliRunner
from demo.main import demo


@pytest.fixture
def runner():
    return CliRunner()


def test_valid_command_execution(runner):
    with patch("demo.main.check_gitlab_credentials") as mock_check_gitlab_credentials, \
         patch("demo.main.check_ollama") as mock_check_ollama, \
         patch("demo.main.generate_embeddings") as mock_generate_embeddings, \
         patch("demo.main.execute_command") as mock_execute_command, \
         patch("demo.main.HuggingFaceEmbeddings") as mock_hugging_face_embeddings:

        mock_hugging_face_embeddings.return_value = MagicMock()
        mock_generate_embeddings.return_value = "mock_embeddings"

        result = runner.invoke(demo, [
            "--api-base", "http://api.gitlab.com",
            "--access-token", "valid_token",
            "--command", "report",
            "--model", "test_model",
            "--group-id", "123",
            "--iteration-id", "456",
            "--labels", "bug",
            "--sprint-goals", "Complete all tasks"
        ])

        assert result.exit_code == 0
        mock_check_gitlab_credentials.assert_called_once_with("valid_token", "http://api.gitlab.com")
        mock_check_ollama.assert_called_once_with("test_model")
        mock_generate_embeddings.assert_called_once()
        mock_execute_command.assert_called_once()


def test_error_handling_invalid_credentials(runner):
    with patch("demo.main.check_gitlab_credentials", side_effect=Exception("Invalid credentials")), \
         patch("demo.main.print") as mock_print:

        result = runner.invoke(demo, [
            "--api-base", "http://api.gitlab.com",
            "--access-token", "invalid_token"
        ])

        assert result.exit_code != 0

uis-sprint-report-0.1.0/tests/test_embeddings.py
@@ -0,0 +1,32 @@
import pytest
from unittest.mock import patch, MagicMock
from demo.embeddings import generate_embeddings
from langchain_huggingface import HuggingFaceEmbeddings


def test_generate_embeddings_integration():
    embedding_model = HuggingFaceEmbeddings()
    with patch("demo.embeddings.get_gitlab_issues") as mock_get_gitlab_issues, \
         patch("demo.utils.manage_cache_file") as mock_manage_cache, \
         patch("langchain_community.document_loaders.TextLoader") as mock_text_loader, \
         patch("langchain_text_splitters.CharacterTextSplitter") as mock_splitter:
        mock_issue = MagicMock(attributes={'title': 'Bug Fix', 'description': 'Fix a critical bug'})
        mock_get_gitlab_issues.return_value = [mock_issue]
        mock_text_loader.return_value.load.return_value = "Bug Fix Fix a critical bug"
        mock_splitter.return_value.split_documents.return_value = ["Bug Fix", "Fix a critical bug"]

        result = generate_embeddings(
            api_base="http://api.gitlab.com",
            access_token="valid_token",
            labels="bug",
            group_id=123,
            iteration_id=456,
            cache_file=".cache",
            chunk_size=100,
            chunk_overlap=20,
            embedding_model=embedding_model,
            sprint_goals="Complete all tasks"
        )

        mock_get_gitlab_issues.assert_called_once_with("valid_token", "http://api.gitlab.com", 123, "bug", 456)
        assert result is not None

uis-sprint-report-0.1.0/tests/test_utils.py
@@ -0,0 +1,33 @@
import pytest
from unittest.mock import patch
from demo.utils import check_gitlab_credentials, check_ollama

def test_check_gitlab_credentials_valid():
    with patch("demo.utils.check_access", return_value=True), patch("demo.utils.print") as mock_print:
        check_gitlab_credentials("valid_token", "http://gitlab.com")
        mock_print.assert_called_with("[green]GitLab credentials are valid[/green]")

def test_check_gitlab_credentials_invalid():
    with patch("demo.utils.check_access", return_value=False), patch("demo.utils.print") as mock_print:
        with pytest.raises(Exception) as exc_info:
            check_gitlab_credentials("invalid_token", "http://gitlab.com")
        assert str(exc_info.value) == "Invalid GitLab credentials"
        mock_print.assert_called_with("[red]Invalid GitLab credentials[/red]")

def test_check_ollama_valid_model():
    with patch("ollama.list", return_value={'models': [{'name': 'test_model'}]}), patch("demo.utils.print") as mock_print:
        check_ollama(model="test_model")
        mock_print.assert_called_with("[green]Ollama model is valid[/green]")

def test_check_ollama_invalid_model():
    with patch("ollama.list", return_value={'models': [{'name': 'test_model'}]}), patch("demo.utils.print") as mock_print:
        with pytest.raises(Exception) as exc_info:
            check_ollama(model="invalid_model")
        assert str(exc_info.value) == "Ollama is not running"

def test_check_ollama_service_down():
    with patch("ollama.list", side_effect=Exception("Service Down")), patch("demo.utils.print") as mock_print:
        with pytest.raises(Exception) as exc_info:
            check_ollama()
        assert str(exc_info.value) == "Ollama is not running"
        mock_print.assert_called_with("[red]Ollama is not running. Error: Service Down[/red]")

uis-sprint-report-0.1.0/uis_sprint_report.egg-info/PKG-INFO
@@ -0,0 +1,80 @@
(Identical to uis-sprint-report-0.1.0/PKG-INFO above: the same metadata header followed by the README as long description.)

uis-sprint-report-0.1.0/uis_sprint_report.egg-info/SOURCES.txt
@@ -0,0 +1,20 @@
LICENSE
README.md
setup.py
demo/__init__.py
demo/config.py
demo/demo.py
demo/embeddings.py
demo/main.py
demo/models.py
demo/prompts.py
demo/utils.py
tests/__init__.py
tests/test_demo.py
tests/test_embeddings.py
tests/test_utils.py
uis_sprint_report.egg-info/PKG-INFO
uis_sprint_report.egg-info/SOURCES.txt
uis_sprint_report.egg-info/dependency_links.txt
uis_sprint_report.egg-info/requires.txt
uis_sprint_report.egg-info/top_level.txt

uis-sprint-report-0.1.0/uis_sprint_report.egg-info/dependency_links.txt
@@ -0,0 +1 @@
(a single blank line)

uis-sprint-report-0.1.0/uis_sprint_report.egg-info/requires.txt
@@ -0,0 +1,14 @@
click>=8.1.7
rich>=13.7.1
python-pptx>=0.6.23
huggingface-hub==0.23.4
langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.10
langchain-huggingface==0.0.3
langchain-text-splitters==0.2.2
faiss-cpu>=1.8.0
pydantic>=2.7.4
scikit-learn>=0.24.1
torch>=1.9.0
transformers>=4.42.3